Network Coding for Streaming of Scalable Video

Summary: Innovative coding technique for delivering scalable video on overlay networks.

Reference Code: 0404

This research product addresses the problem of delivering scalable video on overlay networks. TV broadcasters and video on demand providers need to serve a user population that can grow rapidly (peak times, special events) and sometimes unpredictably (news flashes, etc.), making resource allocation difficult and expensive.

IP-multicast offers lower end-to-end delay, but it is virtually impossible to scale to very high numbers of users. End-system and application-layer multicast distributes the load among several machines, instantly allocating as much resource as the current demand requires; the main problem is then to distribute the scalable video within a time window whilst using resources efficiently and securing the stream against eavesdroppers.

This framework employs local processing in the multicast nodes. Network coding maximizes the use of resources, both in terms of allocated processing power and network bandwidth (up to the theoretical limit of the network).
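
To make the principle concrete, the sketch below shows random linear network coding in its simplest binary (XOR) form: intermediate nodes forward random combinations of the packets they hold, and any receiver that collects enough independent combinations can recover the original stream. This is a minimal Python illustration of the general idea only, not the project's codec; the actual framework operates over a larger finite field and is integrated with the scalable video layers.

    import random

    def encode(packets, num_coded):
        """Produce coded packets, each a random XOR combination of the source packets."""
        coded = []
        for _ in range(num_coded):
            coeffs = [random.randint(0, 1) for _ in packets]
            if not any(coeffs):                      # avoid the useless all-zero combination
                coeffs[random.randrange(len(packets))] = 1
            payload = bytes(len(packets[0]))
            for c, p in zip(coeffs, packets):
                if c:
                    payload = bytes(a ^ b for a, b in zip(payload, p))
            coded.append((coeffs, payload))          # coefficients travel with the packet
        return coded

    def decode(coded, n):
        """Recover the n source packets by Gaussian elimination over GF(2).

        Assumes the received combinations are linearly independent (full rank).
        """
        rows = [(list(c), bytearray(p)) for c, p in coded]
        for col in range(n):
            pivot = next(i for i in range(col, len(rows)) if rows[i][0][col])
            rows[col], rows[pivot] = rows[pivot], rows[col]
            for i in range(len(rows)):
                if i != col and rows[i][0][col]:
                    rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                               bytearray(x ^ y for x, y in zip(rows[i][1], rows[col][1])))
        return [bytes(rows[i][1]) for i in range(n)]

Any n independent coded packets suffice to rebuild the n source packets, which is what allows intermediate nodes to recombine data freely without central coordination.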

Distinctive aspects of the research

The product deploys with very limited feedback, decreasing end-to-end delay at large scale. Additionally, with this coding technique the delay grows only linearly with packet losses, and the final quality is virtually unaffected.

The network coding also secures the data against unauthorized eavesdropping of the transmission. Scalable video coding is processed locally, allowing great flexibility in terms of different services delivered to users.

To date, only Microsoft's Avalanche has used similar principles commercially. Microsoft's approach was proposed for data dissemination and downloading on peer-to-peer overlays, not for streaming, and the software was acknowledged to carry high overhead.

Potential applications

Video on demand or live TV broadcast via:

• Content delivery networks;

• Peer-to-peer.

Partnership sought

The framework is ready for commercial exploitation. Only back-end and front-end programming skills would be needed to produce software to run on any commercial PC, laptop or tablet.

Improved Inter-Prediction for HEVC Video Coding

Summary: Improved video compression via enhanced inter-prediction in the HEVC video codec, using shift transformation to reduce complexity and improve prediction accuracy.

Reference Code: 0403

Production and consumption of digital video has increased dramatically in recent years. This enormous flood of data creates the need for better frameworks for compression and transmission, in the form of video codecs (coder-decoders). Recently the HEVC (High Efficiency Video Coding) codec standard has been ratified, and is due to become the successor to the current state-of-the-art H.264/AVC. The new standard is a good starting point for implementing enhanced techniques to improve encoder components and achieve the highest possible coding efficiency.

Video coding makes use of inter-prediction, the exploitation of similarities and redundancies in successive video frames in order to achieve efficient coding and compression. Three types of redundancy are exploited: spatial, temporal and statistical. Inter-prediction enables a codec to achieve a good balance between video quality, bit-rate and coding complexity.

An Enhanced Inter-Predictor (EIP) module is presented for improving the coding efficiency of a conventional video encoder by means of local parametric transformations of the prediction candidates before transform and quantization. The optimal transformation parameters are compressed and transmitted in the encoded bit-stream. This new technology can provide considerable gains in encoder performance with little impact on coding complexity.

Distinctive aspects of the research

Inter-prediction based on block-based motion estimation is used in most video codecs. The closer the prediction is to the target block, the lower the residual and the more efficient the compression. This enhanced inter-prediction method improves the prediction candidates on the fly by applying an additional shifting transform while performing motion estimation.
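
As a rough illustration of the idea, the sketch below performs an ordinary block-matching search and, for each candidate predictor, additionally tries a small set of sample-value shifts, keeping the (motion vector, shift) pair with the lowest cost; the chosen shift would then be signalled in the bit-stream alongside the motion information. The block size, search range, candidate shifts and the use of a plain DC offset as the "shift" are assumptions made for this example, not the actual parameters or transform of the EIP module.

    import numpy as np

    def sad(a, b):
        """Sum of absolute differences between two equally sized blocks."""
        return int(np.abs(a - b).sum())

    def enhanced_motion_search(target, reference, top, left,
                               size=16, search=8, shifts=(-4, -2, 0, 2, 4)):
        """Return the (motion vector, shift) pair minimising SAD for one target block."""
        block = target[top:top + size, left:left + size].astype(np.int32)
        best_mv, best_shift, best_cost = (0, 0), 0, float("inf")
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + size > reference.shape[0] or x + size > reference.shape[1]:
                    continue
                pred = reference[y:y + size, x:x + size].astype(np.int32)
                for s in shifts:                     # extra parametric transform of the candidate
                    cost = sad(block, pred + s)
                    if cost < best_cost:
                        best_mv, best_shift, best_cost = (dy, dx), s, cost
        return best_mv, best_shift                   # both would be encoded in the bit-stream

In this sketch the extra work amounts to a few additions and comparisons per candidate already being evaluated, which is consistent with the claim of little impact on coding complexity.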

Potential applications

All applications where digital video coding is required, from Internet video streaming platforms to video production tools or digital TV content providers.

Partnership sought

The technology requires the baseline decoder to be modified; it could therefore only be used in commercial applications after inclusion as an official extension to the standard, following successful completion of R&D and testing. The estimated effort is one year.

3D Discrepancy Analysis for Safety Critical Installations

Summary: Computer vision based 3D reconstruction and discrepancy analysis tool supports precise positioning and alignment of safety critical installations in the aircraft industry and elsewhere.

Reference Code: 0401

A unique approach to safety audit of equipment installations combines stereo vision and CATIA model knowledge in a seamlessly integrated system. The discrepancy analysis tool enables identification of differences and non-conformances – to millimetre level accuracy – between an actual installation and its original CAD specification.

In the aircraft industry and elsewhere, precise positioning and alignment of installations is often critically important for safety. This tool offers a solution to problems of imprecise installation by enabling repositioning of components with reference to the original model.

The approach has been successfully evaluated against standard datasets.

Distinctive aspects of the research

Unlike existing industrial solutions which rely on scanners to obtain 3D data for performance audit, here a 3D object reconstruction is generated from multi-view stereo images. Further, CATIA semantic information is used as add-on knowledge to support discrepancy analysis.

The method introduces an improved GrabCut segmentation technique that uses both colour and depth information. (GrabCut, developed by Microsoft Research, is an interactive tool for foreground segmentation in still images.)
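
A much-simplified way to picture the colour-plus-depth idea is to seed OpenCV's standard (colour-only) GrabCut with a mask derived from a registered depth map, as in the sketch below. The published method integrates depth more deeply into the segmentation itself; the file names and the depth threshold here are placeholders.

    import cv2
    import numpy as np

    colour = cv2.imread("installation_rgb.png")                       # RGB view (placeholder file)
    depth = cv2.imread("installation_depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

    # Pixels closer than an assumed threshold are marked "probably foreground",
    # the rest "probably background"; GrabCut then refines the boundary from colour.
    mask = np.full(depth.shape, cv2.GC_PR_BGD, dtype=np.uint8)
    mask[depth < 1200] = cv2.GC_PR_FGD                                # depth in millimetres (assumed)

    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(colour, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)

    foreground = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
    cv2.imwrite("segmented_foreground.png", foreground)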

Potential applications

Safety audits of aircraft installations and other industrial installations where accurate positioning and alignment of components is critical.

Partnership sought

Industry partner sought for commercial application.

Ubiquitous Computing with Google Glass

Google Glass is the name of an augmented reality product developed at Google's secretive research lab, Google X. At first glance, the new device looks like a conventional spectacle frame, but it carries computer chips, a miniature battery and a heads-up display (HUD), enabling the wearer to interact with virtual space whilst on the move.

As with the pilot displays of modern aircraft, text and images generated by Google Glass are overlaid on the real world scene so that the user can switch attention instantaneously between the two. Activated by voice, touch or head movements, the 'Goggles' allow interaction with apps, the Internet and social networking sites, communicating all the while via Wi-Fi or Bluetooth to the user's mobile.

A tiny camera located just above the eye enables live video streaming from the user's own viewpoint, giving recorded experiences a stunning immediacy.

Keeping users connected whilst minimally distracting them from real-world tasks, the new Goggles are a step closer to the goal of ubiquitous computing – complete integration of computing into everyday objects and activities – and present a significant challenge to smartphones. But whether they can reduce the present toll of jaywalking accidents and lamppost collisions remains to be seen!

Google Glass is due to hit the shops in late 2013 or early 2014.

EMC2 at the UK’s Leading Innovation Networking Event

EMC2 is delighted to announce that it will be an exhibitor at the new, multi-sector networking event ‘Innovate UK’. The joint venture from the Technology Strategy Board and UK Trade & Investment unites the highly successful Innovate and TechWorld events and will showcase cutting edge innovation from across the UK.

Handpicked from a pool of the ‘best of the best’ in British creativity and innovation, EMC2 will showcase leading-edge 3D technology research from top-flight European research institutions to an estimated 4,000 UK and international visitors at Innovate UK. In this way EMC2 aims to forge new research-industry partnerships for the development and commercialisation of technology innovations.

As well as showcasing new research the EMC2 exhibit will show visitors some of the innovative work our organisation has been doing in the area of entrepreneurship training for doctoral and post-doctoral researchers.

Interested companies will be invited to join the EMC2 network, enabling them to keep in touch with future technology research in the fast-changing field of 3D media computing and communication.

Innovate UK, which takes place on 11-13 March 2013 at the Business Design Centre in London, aims to drive economic growth. By bringing together individuals from UK and international business, government and academia, Innovate UK will help companies identify new investment, international trade, innovation and collaboration opportunities through networking.

Iain Gray, Chief Executive of the Technology Strategy Board said: “All our exhibitors are specially selected by the Technology Strategy Board and UKTI to feature at the exhibition and can demonstrate truly innovative products, services or technologies that have been developed here in the UK. We’re very pleased EMC2 is taking part in the event and believe it is a great representation of the UK’s strength in innovative technology research.”

Nick Baird, Chief Executive of UK Trade & Investment said: "Over the last two years, businesses that attended Innovate and TechWorld collectively generated over £70m worth of UK trade; 65% of them identified new business opportunities and 77% said they learned something that would help them to innovate. By bringing the events together we hope to deliver even more success for UK businesses whether by helping them grow through innovation, international trade and investment or collaborative opportunities.”

Delegates attending Innovate UK can take advantage of one to one meeting opportunities, seminars, workshops, an exhibition and keynote sessions featuring leading business and ministerial representatives. During the three days, participants can share knowledge, access expert advice, showcase innovation and find collaborative partners.

Tickets cost £125 (including VAT). For more information, visit www.innovateuk2013.co.uk, follow on Twitter @InnovateUK or join the conversation on connect.

ACM Multimedia 2013 Barcelona: Deadlines Approaching!

Submission deadlines are fast approaching for the 21st ACM International Conference on Multimedia in Barcelona, Spain, 21–25 October 2013.

ACM Multimedia is the premier worldwide multimedia conference and the key event for presenting scientific achievements and innovative industrial products. The conference offers scientists and practitioners technical sessions, tutorials, competitions, panels and discussion meetings on relevant and challenging research questions. In addition to a strong technical program, ACM Multimedia 2013 will provide contexts for interaction between artists, research scientists and practitioners with the aim of reflecting on the impact of multimedia technologies on contemporary digital culture.

Call for Papers

ACM Multimedia 2013 calls for full papers presenting interesting recent results or novel ideas in all areas of multimedia and its applications. At the same time, the conference calls for short papers presenting interesting and exciting recent results or novel, thought-provoking ideas that are not yet fully mature, preferably accompanied by a system demonstration.

  • Abstract for full papers: 1 March, 2013
  • Manuscript for full/ short papers: 8 March, 2013

Call for Tutorials

ACM Multimedia 2013 tutorials will address state of the art research and developments in all aspects of multimedia, and will be of interest to everyone from novices to seasoned researchers, from academics to industry professionals.

Proposals are invited for tutorials of either a half day (3 hours plus breaks) or a full day (6 hours plus breaks).

  • Tutorials submission deadline: 8 April, 2013

Call for Brave New Idea Papers

'Brave New Ideas' should address long-term research challenges, point to new research directions, or provide new insights or bold perspectives that pave the way to innovation.

ACM Multimedia 2013 seeks contributions that explore innovative approaches and paradigm shifts in conventional theory and practice of multimedia techniques and applications.

  • Proposal for papers (two-page abstract in ACM format): 15 March, 2013

Call for Papers For Doctoral Symposium

The ACM Multimedia 2013 doctoral symposium will provide PhD students working in all areas of multimedia with a unique opportunity to interact with renowned and experienced researchers in the field and to discuss their research with them.

PhD students are invited to send a short paper related to their ongoing research.

  • Doctoral symposium submissions deadline: 5 April, 2013

Call for Panel Proposals

ACM Multimedia 2013 invites proposals for panel sessions on timely and potentially controversial issues in all areas of multimedia, including content analysis, systems, emerging applications, industry trends, technical breakthroughs, user needs and social aspects.

  • Panel proposals submission deadline: 8 April, 2013

Call for Workshop Proposals - Regular Workshops

Regular workshops should focus on current or emerging topics of broad interest to the ACM Multimedia community. They should allow members of the community to compare and discuss approaches, methods and new concepts on research topics pertinent to the main conference.

  • Proposals submission deadline: 15 February, 2013

Accurate Structure Segmentation in Brain MR Images

Summary: An innovative method for highly accurate hippocampus and amygdala segmentation in MR images, based on the modelling of boundary properties at each anatomical location and the inclusion of appropriate image information.

Reference Code: 0104

Image processing tools are currently used as software add-ons for MRI (Magnetic Resonance Imaging) scanners. In the field of neuro-imaging, brain image segmentation tools help guide the doctor/ radiologist to regions of the brain that need to be investigated further. Brain segmentation tools are also applied in medical research, e.g. for morphological analysis of brain structures through which disease-related indicators can be identified.

Existing tools (e.g. Analyze) are based solely on brain atlas information and take no advantage of current active-contour-based techniques. Active contours take advantage of image information and, by appropriate blending with prior knowledge, can be very efficient and accurate. Thus the present challenge is how to better model prior knowledge and insert it into the active contour evolution framework, and how to use this in combination with atlas information.

The researchers have developed an innovative method for highly accurate hippocampus and amygdala segmentation in MR images, based on the modelling of boundary properties at each anatomical location and the inclusion of appropriate image information. Blending of image information and prior knowledge is based on a local weighting map which mixes gradient information, regional and whole-brain statistical information with a multi-atlas-based spatial distribution map of the structure's labels.
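
The sketch below gives a rough idea of what such a local weighting map can look like: an edge term, a regional intensity-likelihood term and a multi-atlas label-probability map are combined voxel by voxel into a single map that can drive the contour evolution. The particular terms and weights are illustrative assumptions, not the published formulation.

    import numpy as np
    from scipy.ndimage import gaussian_gradient_magnitude

    def blended_map(image, atlas_prob, fg_mean, fg_std,
                    w_edge=0.4, w_region=0.3, w_atlas=0.3):
        """Blend image evidence and prior knowledge into a per-voxel map in [0, 1].

        image      : MR intensities (2D slice or 3D volume)
        atlas_prob : multi-atlas spatial probability of the structure's label, same shape
        fg_mean/std: intensity statistics of the structure, e.g. estimated from the atlases
        """
        grad = gaussian_gradient_magnitude(image.astype(np.float32), sigma=1.0)
        edge = grad / (grad.max() + 1e-12)                                # high where boundaries are likely
        region = np.exp(-((image - fg_mean) ** 2) / (2.0 * fg_std ** 2))  # structure intensity likelihood
        return w_edge * edge + w_region * region + w_atlas * atlas_prob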

Potential applications

• Support for doctors and radiologists examining MRI scans, who need to quickly locate the regions of the brain that should be investigated further.

• Tools for accurate segmentation of brain MRI scans to support medical researchers in morphological (or other) analysis of brain structures.

Partnership sought

There is strong potential for collaborative development. The experiments so far have been conducted on 2D slices of a specific pair of brain structures (hippocampus and amygdala) but the method is being extended to 3D and for application to other brain structures. Partners are sought for this research and development effort, estimated at 1-2 years.

Yet another audio features extractor… a great one!

Summary: YAAFE is an efficient toolbox that computes audio features out of audio signals.

Reference Code: 0301

For numerous applications involving audio signals, it is necessary to extract audio features (or audio descriptors) in an efficient manner. YAAFE is an efficient toolbox that computes such audio features.

Distinctive aspects of the research

YAAFE stands for "Yet Another Audio Features Extractor". It’s a program designed for efficient computation of many audio features simultaneously. Audio features are usually based on intermediate representations (FFT, CQT, envelope, etc.); YAAFE automatically organizes the flow of computations so that these intermediate representations are computed only once. Computations are performed block by block, so YAAFE can analyze arbitrarily long audio files.
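
The sketch below illustrates the principle (it is not YAAFE's own code): three features that all depend on the magnitude spectrum share one FFT per block instead of each recomputing it, and the signal is consumed block by block so memory use stays bounded regardless of file length.

    import numpy as np

    def extract_features(signal, block_size=1024, step=512):
        """Compute spectral centroid, spectral roll-off and energy, block by block."""
        centroids, rolloffs, energies = [], [], []
        window = np.hanning(block_size)
        freqs = np.fft.rfftfreq(block_size)
        for start in range(0, len(signal) - block_size + 1, step):
            block = signal[start:start + block_size].astype(np.float64)
            spectrum = np.abs(np.fft.rfft(block * window))   # intermediate representation, computed once
            total = spectrum.sum() + 1e-12
            centroids.append(float((freqs * spectrum).sum() / total))
            cumulative = np.cumsum(spectrum)
            rolloffs.append(float(freqs[np.searchsorted(cumulative, 0.85 * cumulative[-1])]))
            energies.append(float(np.sum(block ** 2)))
        return np.array(centroids), np.array(rolloffs), np.array(energies)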

Potential applications

  • Audio indexing systems (music information retrieval, multimedia database search, etc.).
  • Audio processing systems, including audio scene understanding.

Partnership sought

The toolbox is available as source code and can be used to derive a product. An extension of YAAFE that provides higher-level audio descriptors is also available (its source code can be made available under a specific agreement).

Resources and information

http://yaafe.sourceforge.net/

Real-time Kinect-based 3D Reconstruction of Moving Objects

Summary: A low-cost system for full, real-time 3D capture and reconstruction of moving foreground objects and humans using multiple Kinect sensors and a single host PC.

Reference Code: 0107

Robust, realistic and fast 3D reconstruction of objects in real-life scenes is a challenging problem, especially for moving objects, but is important for applications ranging from cultural heritage (e.g. reconstruction of museum objects) to the movie industry and VR/ AR applications.

Most current approaches synthesize only intermediate views for given viewpoints rather than generating complete 3D models, and are therefore unsuitable for use with modern stereoscopic or holographic display systems.

Creating complete and realistic 3D models in real time has so far required either (i) a large number of RGB cameras or expensive stereo-camera pairs, or (ii) high computational power, i.e. a large number of host PCs or expensive computing stations. There is a trade-off between accuracy and speed (or cost).

A capture system and method for accurate, full 3D reconstruction of moving foreground objects and humans has been developed. The system uses multiple Kinect sensors and the software exploits the capability of current GPUs (Graphical Processing Units) to run in real time on a single host PC. Achieved reconstruction rates are as high as 15 frames per second.
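
The core geometric step can be pictured with the simplified CPU sketch below: each Kinect depth map is back-projected into 3D through the sensor's intrinsics and moved into a common world frame through its extrinsic pose, after which the per-sensor clouds are merged. The intrinsics, extrinsic poses and foreground depth range are placeholders standing in for a real calibration; the actual system performs this (and the subsequent surface reconstruction) on the GPU to reach real-time rates.

    import numpy as np

    FX = FY = 575.8            # assumed Kinect depth-camera focal length (pixels)
    CX, CY = 320.0, 240.0      # assumed principal point for a 640x480 depth map

    def depth_to_world(depth_mm, pose):
        """Back-project one depth map (millimetres) and move it into world coordinates."""
        v, u = np.indices(depth_mm.shape)
        z = depth_mm.astype(np.float32) / 1000.0                  # metres
        valid = (z > 0.4) & (z < 3.0)                             # assumed foreground depth range
        x = (u - CX) * z / FX
        y = (v - CY) * z / FY
        points = np.stack([x[valid], y[valid], z[valid], np.ones(valid.sum())])
        return (pose @ points)[:3].T                              # pose: 4x4 extrinsic matrix per sensor

    def merge_clouds(depth_maps, poses):
        """Fuse the clouds from all Kinect sensors into one point set."""
        return np.vstack([depth_to_world(d, p) for d, p in zip(depth_maps, poses)])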

Distinctive aspects of the research

With this solution the overall cost is low, thanks to the low cost of Kinect and the exploitation of the computational power of GPUs, while the quality of reconstruction is adequately high given the real-time demands.

Potential applications

• Tele-immersion: Future Internet will support immersive, collaborative environments wherein accurate 3D reconstructions of humans (or personalized avatars of them) interact.

• 3D gaming: Participation in 3D games through real-time 3D face/ body reconstruction or personalized avatar.

• Virtual clothing applications.

Partnership sought

There is potential for further research and development work, estimated at 1-2 years. Some improvements to the software and methodology are required.

Image-based Product Search Using Smartphones

Summary: Smartphone application for searching online product catalogues by sketch, image or text query.

Reference Code: 0108

Internet technologies have dramatically changed shopping behavior: instead of visiting real shops, users can browse and order from online product catalogues. However, the vast amount of digital content available online makes searching increasingly difficult.

Text-based search engines are limited in that text is not always capable of representing what the user is looking for, e.g. when products are annotated in a different language or the user doesn't know the name of the desired item. An alternative approach allows the user to specify the desired product by providing a sketch or an image of a similar item.

Several commercial applications already offer "search by recognition", but these tools cover only a limited number of application areas and in some cases their retrieval accuracy is poor.

The application presented here enables product search from a smartphone, using as queries text, stored images, photos taken with the smartphone camera or interactively drawn sketches. Additional features enable refinement of search results by price, user location or specific product features (size, material, colour, etc.).

Distinctive aspects of the research

Cutting-edge technologies for efficient multimedia similarity matching ensure more reliable retrieval of relevant products.
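
A minimal sketch of such similarity matching is given below. It assumes every catalogue product and the incoming query (photo, sketch or stored image) have already been converted into fixed-length descriptor vectors by some feature extractor, and simply returns the catalogue items with the highest cosine similarity; the descriptors and indexing used in the actual application are not reproduced here.

    import numpy as np

    def top_matches(query_desc, catalogue_descs, k=10):
        """Return the indices of the k catalogue items closest to the query descriptor."""
        q = query_desc / (np.linalg.norm(query_desc) + 1e-12)
        c = catalogue_descs / (np.linalg.norm(catalogue_descs, axis=1, keepdims=True) + 1e-12)
        scores = c @ q                              # cosine similarity to every catalogue item
        return np.argsort(scores)[::-1][:k]

Refinement by price, location or product attributes then amounts to filtering or re-ranking this candidate list.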

Potential applications

Sketch- or image-based search of online product catalogues, e.g. furniture, clothing, DIY.

Partnership sought

The research and development result is mature and ready for application, subject to minor improvements in the querying and presentation interface. The image-based search algorithms are being further improved.
