The Future of Live TV

We’re working with our customers to share their stories and insights, offering you a rare glimpse into the future of systems from some of the world’s most exciting and innovative industries and development teams. Enjoy!

The Future of Live TV Production and Broadcast - The Content Experience


By Bryn Balcombe – CTO, London Live

The Future of Live TV Production and Broadcast needs to be defined by the expectations of the consumer. This is not just about content access. It’s about the content experience itself.

Filmmaking recently underwent a similar change that set the baseline for these new expectations. Even progressive filmmakers such as George Lucas saw computing only as a tool to enhance live-action films. However, a small team, backed by Steve Jobs, left Lucasfilm to form Pixar in the belief in a different future: that software and compute power could create feature films in their entirety. The result was Toy Story, released 20 years ago [1].

After 16 years of research for one of the world’s largest sports rights holders, I believe live sports production has reached the same tipping point. The most progressive sports broadcasters use computing only to enhance live productions, but advances in software and processing power now allow us to capture 3D worlds – real life in real time – and use that compute power to create completely new live event experiences.

This is what happens when the Industrial Internet of Things (IIoT) is applied to live TV broadcast. Without recognising this potential, the TV industry risks having its own Kodak moment [2] as new competitors such as Apple, Google and Amazon become uniquely positioned to deliver this future.

The Future of Live Event Experiences

Live sports will be the first battleground, but this shift will impact the commercial foundations of the entire broadcast industry [3]; competitors will enter the sector with an offering completely differentiated from existing linear TV broadcast, opening up new market opportunities for rights holders and platform providers.

The capture of real 3D worlds in real time sets the stage for radically different user experiences: watching major events from any perspective you choose – live, cinematic views from any location at any resolution. Anyone, anywhere can control the action and choose the stories to follow; anyone can direct; anyone can fly the camera. It’s personalised or shared, lean-back or lean-forward, live or time-shifted – on mobile, tablet, game console, TV, cinema or in Virtual Reality (VR).


Fig. 1 The Integration of 3D compute into Live TV has already started.

These completely new live event experiences can only be created using computing, and they will uniquely target the capabilities of modern end-user devices: touch screens, gyros, controllers and voice interactions that have become natural to young and old alike.

The IIoT enables converged content acquisition for a diverged audience.

Current Industry Direction

The broadcast TV industry is picture-quality driven – increasing resolution (HD, 4K, 8K), High Dynamic Range (HDR), Wide Colour Gamut (WCG) and High Frame Rate (HFR) are all future directions being considered [4]. Ever-increasing communication speeds will be required.

The industry bodies are looking towards IT networking hardware to meet the demand – leveraging R&D investments driven by other industries while broadcasting was creating its own single-purpose hardware appliances.

Bandwidth demands are driving COTS (Commercial Off-the-Shelf) IT adoption in TV broadcast – not compute. Yet new experiences can only be created with compute.

The TV industry has recognised it is being outpaced by the scale of IT networking R&D investment. The same is true of communication protocols and processing; the combined efforts of multiple industries are perhaps best defined by what IBM describes as “the next paradigm in computing – data centric systems” [5].

This system-of-systems thinking is currently missing from the TV industry’s standardisation approach, but it is fundamental to the future of live TV production. The industry relies not only on human image interpretation but also on human communication and human control.

Our research at London Live leads us to believe that a standardised middleware, the Data Distribution Service (DDS), delivers the right mix of data-centric, open-standard, real-time communication – not just for carrying information between systems but also for the individual images themselves: each frame uniquely identified, so that it can be analysed, correlated and manipulated in isolation, or in temporal or spatial groups. London Live is now working towards a fully reactive TV studio environment based on this open standard.
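As a minimal sketch of what “each frame uniquely identified” means in practice, the following plain-Python model (not a real DDS API – DDS vendors provide their own bindings, and the class and field names here are illustrative) publishes every frame as a keyed data sample that consumers can correlate without decoding a linear video stream:

```python
import time
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class FrameSample:
    """A single video frame published as data, not as an opaque stream."""
    camera_id: str      # key: which camera produced the frame
    frame_number: int   # unique per-camera sequence number
    timestamp_ns: int   # capture time, for temporal correlation
    payload: bytes      # pixel data (elided here)

class FrameBus:
    """Toy data-centric bus: subscribers register by topic key (camera_id)."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, camera_id, callback):
        self._subscribers[camera_id].append(callback)

    def publish(self, sample: FrameSample):
        for cb in self._subscribers[sample.camera_id]:
            cb(sample)

# Usage: correlate frames from two cameras captured at the same instant.
bus = FrameBus()
received = []
bus.subscribe("cam-1", received.append)
bus.subscribe("cam-2", received.append)

t = time.time_ns()
bus.publish(FrameSample("cam-1", frame_number=1001, timestamp_ns=t, payload=b""))
bus.publish(FrameSample("cam-2", frame_number=2001, timestamp_ns=t, payload=b""))

# Because each frame carries identity and time, a consumer can group
# samples temporally or spatially without parsing a video signal.
same_instant = [s for s in received if s.timestamp_ns == t]
```

The point of the sketch is the data model, not the transport: once frames are first-class, keyed data samples, analysis and manipulation become queries over metadata rather than signal processing.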


Fig. 2 London Live – Integrating automated robotics into live TV broadcast

TV’s Kodak Moment?

The full implications of the shift towards a computing-based future are not yet addressed, and some analysts predict TV broadcast risks having its Kodak moment [6].

Without considering the requirements of future user experiences, the industry risks locking itself into standardisation that will, ultimately, limit what is possible.

Mapping of 3D worlds is the direction experiences are moving in. Ari Emanuel, co-CEO of William Morris Endeavor (WME) and IMG, an entertainment, sports and media agency, has stated: “The future of media and entertainment is not going to be a flat screen. OTOY[7] is building the content pipeline for the next generation of movies and computer graphics, where immersion and presence will be a key axis of the creative process.”

Is that where the Live TV Broadcast Industry is heading?

In 2014, GE Chairman and CEO Jeff Immelt made a powerful statement about the impact of the IIoT future: “If you went to bed last night as an industrial company, you’re going to wake up this morning as a software and analytics company.” [8]

The major risk is that the TV industry’s shift to a software future is being built on foundations that assume human interpretation of images – foundations that do not scale. Just as proprietary broadcast systems are designed to make these human processes efficient without changing them, the current approach to IT adoption runs the same risk.

New competitors

If broadcasters don’t create new experiences, others will – Apple, Amazon and Google are the competitors now.

The key asset is no longer the TV broadcast transmission to the home via terrestrial, cable or satellite. The key asset is the cloud-based ecosystems these competitors are building. They compete on the total experience that can be delivered using this infrastructure – an experience that now combines the user devices (they design), the OS and app software (they write) and the core data centres (they’ve built). They compete on the intellectual property that differentiates the total experience.

Content of all types is layered on top.

The future of Live TV is compute-centric and Live TV communication has to become data-centric to enable new consumer experiences.

Footnotes

[1] Ed Catmull (President of Pixar Animation and Disney Animation), Creativity, Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration

[2] North River Ventures http://www.northriver.com/ventures/broadcast-tv-dead-man-walking/ & Profiting From the Cloud Membranes Part Four (http://vimeo.com/102073844 @ 3m50s) My only point of difference is that I do not see live sports as a form of protected linear media services – in fact I believe that the application of compute to live event capture (sports or news) will be the industry tipping point – perhaps these will be referred to as “spatial media services” rather than linear/non-linear.

[3] Other live events, including global news, will follow.

[4] https://tech.ebu.ch/docs/techreports/tr028.pdf

[5] http://ibmresearchnews.blogspot.co.uk/2014/11/data-centric-systems-new-paradigm-for.html

[6] See note [2].

[7] http://render.otoy.com/newsblog/?p=547

[8] Third “Minds + Machines” Summit http://youtu.be/djB6BmBda6Q & http://fortune.com/2014/10/10/ge-data-robotics-sensors/

4 comments

  1. Yeah, no. I was a senior executive at a major broadcast network and tested things like this over and over. Epic fail. You mention George Lucas. Lucas was not focused on technology, just alternative methods to serve a dramatic story. The idea of compu-centric is totally off base. We tested this kind of thing, people almost never used it. Why? It is structurally drama-centric. Propeller heads working in labs don’t have the answer. This is not the future. Harvard University already issued papers on the subject backing our research along with several other universities. Leave the storytelling to the storytellers!

    It would be a great idea, if it wasn’t already tried and tested by leading content producers years ago. Technology is not the problem; neuroscience is.

    Compucentric. Funny. There is your fatal flaw.


    • Hey Carl, thanks for your comments.

      Neuroscience and narrative structure are absolutely the right foundations – in fact “Narrative Interaction for Live Events” is a particular interest of mine. Without understanding story structure and how it is comprehended you cannot create a compelling visual experience. Will computers be able to fully automate film or drama production better than the best directors and creative teams in the world? I don’t believe so either.

IBM’s Deep Blue beat Garry Kasparov at chess in 1997 and IBM Watson won Jeopardy! in 2011 – but both are logic and information challenges, not creative endeavours. So I’m in total agreement. My reference to Lucas and Pixar films was therefore misleading. It was only meant to highlight that Pixar believed an audience would watch a full-length feature film which is “rendered” by computers. The storytelling, the creative detail, the film’s emotion that the audience connects with – all of that comes from the amazing talent of the people within Pixar. Computer rendering is not perfect, but since 1995 it has been good enough to convey story – no cameras required.

      Live sports are different to drama/films. Narratives are formed from events taking place in real time in the real world. The role of the Live TV production is to represent rather than create narratives. However, in that representation comes distortion. Ask an Italian director to film the Italian F1 Grand Prix for an Italian TV audience and the rest of the world will notice a Ferrari bias in the coverage. The solution to date is to create an “international feed” without bias – but the production team still has to identify, rank and select which narrative to represent at any given moment – there remains only one presentation to the entire global audience.

Giving the viewer more video feeds from the event and letting them select has been tried, at various scales. In my mind these approaches are flawed for exactly the reason you raise, Carl – they are not structured at a narrative level. How can a viewer realistically structure a narrative from 65+ video sources in real time when we can only attend to a small part of one image at any moment? In fact, the narrative structure of an F1 race is actually read from positioning and timing information displays. This information is then used to direct the cameras to follow a specific narrative – so even if you could see all source cameras, there would still be narrative bias in those preselected camera viewpoints.

      Fundamentally all sports are spatial events. Realtime in three dimensions. Yet our representations are only 2D with momentary jumps in 3D space between preselected camera locations and viewpoints.

The interactive photorealistic 3D cities within Apple Maps or Google Earth are missing only one thing – live action. These are already computer-“rendered” experiences, and they allow personalised viewpoints to be selected by the individual user.
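The geometry behind those personalised viewpoints is straightforward: once an event is captured as 3D data, rendering any viewer’s perspective reduces to projecting world points through a virtual camera they control. A plain-Python sketch (hypothetical values, simple pinhole model with rotation about the vertical axis only):

```python
import math

def project_point(point, cam_pos, yaw, focal=800.0, cx=640.0, cy=360.0):
    """Project a 3D world point into a virtual pinhole camera.

    The camera sits at cam_pos, rotated by `yaw` radians about the
    vertical (y) axis; returns pixel coordinates, or None if the
    point is behind the camera.
    """
    # Translate into camera-centred coordinates.
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    # Rotate about the y axis so the camera looks along +z.
    xc = math.cos(yaw) * x - math.sin(yaw) * z
    zc = math.sin(yaw) * x + math.cos(yaw) * z
    yc = y
    if zc <= 0:
        return None  # behind the virtual camera
    # Pinhole projection onto the image plane.
    u = focal * xc / zc + cx
    v = focal * yc / zc + cy
    return (u, v)

# Two viewers choose two different viewpoints of the same captured
# 3D point -- same data, personalised presentation.
ball = (0.0, 1.0, 10.0)
view_a = project_point(ball, cam_pos=(0.0, 1.0, 0.0), yaw=0.0)  # (640.0, 360.0)
view_b = project_point(ball, cam_pos=(5.0, 1.0, 0.0), yaw=0.0)  # (240.0, 360.0)
```

This is exactly what Apple Maps or Google Earth style renderers do per pixel, at scale; the hard part for live TV is keeping the 3D scene itself updated in real time.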

      However, unless there is a fundamental change in the way image sensors are used to capture these live events this type of experience will be impossible to create.

      Carl, once this foundation is in place I fully expect a resurgence in the importance of narrative within this medium – specifically to create lean back experiences. However, at this point a Ferrari centric portrayal of the race would not prevent an alternate one focused on a specific driver. In fact these virtually directed experiences have more creative freedom as they will be able to choose any camera location to illustrate the narrative – even those that would be impossible without interfering with the field of play.

      In a lean forward viewer controlled viewpoint experience – I’d also fully expect interaction controls to operate at a narrative level not just allowing you to randomly pilot a virtual drone over an event.

      I hope that’s added some more context. It’s an exciting future but it requires new image capture and processing foundations.

      Many thanks,

      Bryn


  2. The fatal flaw is all of this content on a unicast architecture. Once we give TCP/IP the underlying broadcast architecture it needs, then the playing field will have shifted against the set top box.


With its ability to utilise UDP/IP, TCP/IP, TLS, DTLS and shared memory, and to be extended via pluggable transport APIs to InfiniBand, PCI Express backplanes or custom transports, DDS is well placed to play a significant role in the real-time, data-centric communications that will be required to deliver the vision above.

The shift described draws many parallels with vision processing in real-time robotics applications, such as autonomous vehicles. So it’s encouraging that the ROS community is beginning to consider DDS as a communication architecture to build upon.

    http://roscon.ros.org/2014/wp-content/uploads/2014/07/ROSCON-2014-Next-Generation-of-ROS-on-top-of-DDS.pdf

The consumerization of real-time spatial capture, with devices such as Google’s Project Tango, will drive software innovation at a rate that the traditional hardware-based TV broadcast industry will find difficult to react to.

    https://www.google.com/atap/project-tango/

