
JOURNAL COVERING TV, FILM & ELECTRONIC MEDIA

WWW.FKT-ONLINE.DE

NAB SPECIAL 2019

Image: ISE/Thomas Krackl

The most prestigious technical trade magazine for broadcast, film and media technology



Subtitling: An Opportunity for Broadcasters
Embracing Timed-Text Services Ensures Nothing's Lost in Translation

The article investigates the challenges and technological breakthroughs in subtitling, and why it plays such a critical part in the broadcast industry.

As subtitles provide so much value to the industry – not limited to the hearing-impaired or to translating foreign films – why are so many broadcasters and OTT providers missing a trick by not taking up this obvious revenue-boosting opportunity in front of them? They could be using subtitles to their advantage to reach a much wider audience and make their services accessible to an even larger customer base.

In a world that's increasingly interconnected on every level, from work to entertainment, subtitles or "timed-text services" have become an indispensable and invaluable commodity. Following recent government legislation requiring broadcasters to deliver subtitling services on linear TV, OTT, and online channels, the pressure is on to migrate to time-efficient, cost-effective timed-text technologies.

The digital age of livestreaming, YouTube and other OTT platforms is placing increasing demand on immediate timed-text services, which must be delivered with high levels of speed and accuracy, combined with cultural and lingual competence, to ensure clear communication is conveyed.

Recent research confirms that many people now view video footage without sound, whether on phones, laptops, iPads, or other screens. The audience viewing this way is increasing at a rapid rate, outnumbering traditional television viewers. This trend is creating an even greater demand for subtitling, especially among younger generations, where subtitles can powerfully serve to bridge gaps in communities.

Subtitling has been a major factor in accelerating and generating the sharing of ideas across the globe at a pace never quite seen before. It can be compared to the irreversible impact early writing and the printing revolution had on shaping events. During the 1970s, subtitling technology went through diversification and improvements in both preparation and emission. Technology is now beginning to have a positive impact in transforming traditional television subtitling for the better.


All images: PBTEU


ALEXANDER STOYANOV

is a Subtitling Professional and Managing Partner of PBTEU

www.pbteu.com


With many subtitling platforms around, it's crucial that solutions meet specific needs within budget. Subtitling is evolving at a rapid rate and has quickly adapted to HD and other formats. There are also ongoing requirements for improved subtitle presentation, as well as the introduction of file-based production workflows, including the inescapable convergence with web and online data.

Subtitles are effectively timed-text metadata transcripts with metatools. They complement many environments by transmitting information, descriptions, translations and even emotion. A major benefit of subtitling is that the original material remains accessible to viewers, providing a more authentic experience than dubbing or lip-synching would.
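As a minimal illustration (an invented cue, not from the article), a subtitle in the widespread SubRip (.srt) format is nothing more than an index, a time window and the text to display; richer timed-text formats such as EBU-TT-D or IMSC/TTML layer styling, positioning and language metadata on top of the same idea:

```
1
00:00:12,500 --> 00:00:15,000
[door slams]
I told you this would happen.
```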

Language transfer methods using subtitles have become much easier to read too, with many channels now using subtitling for all foreign material. Rolling text performs a useful function in news broadcasts, making them easier to follow in public places where the sound is often muted. Reliable and accurate timed text used online can also help power searches, target advertising, and increase search traffic, page views, search rank and engagement, thereby increasing awareness and attracting new business. The advantages are practically limitless.

Subtitles not only play an instrumental role in broadcasting but can be a crucial, life-changing factor in many other sectors too. They can be used to improve speech recognition and literacy, aid communication, provide language-learning support, help with hearing impairment, enable live remote captioning and much more.

It's also vital that people with impairments can fully engage with television for social, cultural, and family inclusion. About 40% of UK viewers watch TV on demand over the internet, for example, compared with less than 30% of people with hearing difficulties. More than two thirds of on-demand TV providers do not provide any 'access services' – either subtitles or audio description. Broadcasters clearly need to keep up.

Social media penetration is an influential driver for the industry and the most "uncharted territory" regarding timed-text services. It's a brave new world of possibilities and will greatly define whole new segments in production and methods of consumption.

Live events form another niche where timed-text services show great potential. In the future this will grow into a standalone industry due to the particularities and specific challenges it presents – an emerging market for which we have already planned ahead.

Quality is key, and broadcasters need to dramatically improve the accessibility of catch-up and on-demand services. The consensus is that the quality of subtitles generally needs to improve overall, and that pre-recorded programmes should have pre-recorded subtitles, since live subtitling technology causes delays and inaccuracies. Once viewers become a little more educated about what quality timed-text servicing really is, it will make a difference.

Progress has been made in theatres, concerts, churches, conferences, festivals, exhibitions, and universities in reaching live audiences through loop technology and subtitling. Theatres and home cinema are great consumers of subtitling, with 3D, VR and AR becoming more popular. London's National Theatre is experimenting with "floating subtitles" by testing mixed-reality glasses.


With operas mainly presented in Romance languages, subtitling has made them more accessible. Our latest project with Vienna's State Opera House brought fans from several continents together to enjoy the show in their own language.

Respecting linguistic particularities is essential, and being local is important for quality localization. The battlefield where localization and AI meet is also a topic rising up the agenda. With Brexit on the horizon, attitudes toward cultural exchange might alter within the European context. How audiences approach television viewing, and whether expectations change, remains to be seen. Research shows 50% of viewers in Europe still prefer subtitles for foreign programmes and films.

In my experience, the main challenge concerning the adoption of subtitling advancements is not technological or cultural, but lies in the mindset of today's broadcasters. Most broadcasters still live with the idea that subtitles are an element of content that "magically appears." Many broadcasters need to get informed about the intricate, painstaking process behind the scenes: dedicated work on lengthy and demanding processes that include editing, quality control procedures and endless deliveries of accurate timed-text services.

On the post-production side, most editing software contains only very basic subtitling functions, not to mention how fundamentally inefficient it is for an editor to take on the subtitling role as well. It's ideal to have a standalone subtitling system that specializes in proficiently completing the process; it is a great time saver.

Like everything in broadcasting, unforgiving tight deadlines, multiple versioning, and last-minute corrections also govern timed text. This requires intuitive, automated, and fast handling of complex assignments. For example, scene tracking and "on the fly" format conversions are just a couple of features we introduced lately to help our customers face those types of challenges.

There appears to be widespread ignorance among many broadcasters concerning the various technical formats and standards. Many professionals are missing potential opportunities that subtitling can bring. This is the first hurdle that needs to be overcome. Once broadcasters start to understand the benefits that subtitling in all its forms can bring to their audience, and perceive it as an asset that adds value to their service, a new chapter will open for timed-text services in the industry.

A unified product environment that brings as many correction tools as possible into one single platform is our offering. This enables subtitle artists, ranging from enthusiasts to professionals, to accomplish tasks quickly and efficiently.

Subtitling software should be considered by broadcasters as a wider investment, as it doesn't have to be restricted to subtitling functions alone. Many of the technological breakthroughs in subtitling are down to innovative modules that now support universal subtitle format transcoding and real-time live content.

Subtitling software can also be multi-purpose and used to secure systems. Hybrid solutions that combine comprehensive business processing and management systems with subtitling software bring the best of both worlds. They can perform all text-related tasks such as translating, spotting, transcoding, and QC, both live and automated, while also being able to manage complex processes simultaneously. These layers work together with cloud and desktop applications.

Expedient technologies get defined by the people that use them. Proactive developers incorporate customer-driven development and understand how fundamental it is that software is fit for integration in collaborative systems within broadcast workflows. It should support all media, from local files, streaming and custom-protected content for multipurpose text services to graphics for television, cinema and the web, along with customisable UIs and more.

Broadcasters have a uniquely responsible position to communicate "en masse". Embracing subtitling technology can expand their influence for good and improve our world.

At IBC 2018, PBT EU launched its new timed-text hybrid platform NEXT-TT, powered by Profuz Digital, giving users greater freedom of choice in how they work with desktop or cloud applications.

The new web-based hybrid platform consists of the subtitling software application SubtitleNEXT working together with LAPIS (Limitless Advanced Powerful and Intelligent System). Combining these two separate components results in one powerful unified platform; both components can also be used separately. The combination enables much faster and more reliable high-quality timed-text management, dubbing and localization toolsets that equip translators, AV professionals, and creative freelancers to work at a much more effective level.

PBT EU also presented the hybrid platform to the attendees of The Languages and The Media 2018 Conference in Berlin, which hosted over 375 participants from 40 countries. This year's event marked the 12th international conference on language transfer in AV, designed to examine crucial challenges and the way in which cutting-edge technologies are changing how AV media is delivered globally and how it's consumed across languages.

In addition, subtitlers are welcome to visit the NEXTclub, a growing community sharing tips and tricks, where members can contribute to forums and have access to industry insights. More information at https://subtitlenext.com/club/


One Day at the Museum

This article outlines how the Network Device Interface (NDI) SDK adds interactivity to live VR video footage produced for the Hermitage Museum in St. Petersburg.

Focal Point VR believe that VR offers an entirely new way of experiencing the real world, which is why they are passionate about creating the technology necessary to realise the dream of people being able to put on a VR headset and be 'teleported' to another place. Consequently, Focal Point VR is a leader in the field of live-stream VR video and provides solutions to live stream VR content in sport, music, art, education and culture, including The Gadget Show and the Champions Tennis at the Royal Albert Hall. Its UHD broadcast-quality VR streaming platform, Ubiety, delivers the highest quality VR live streams to all platforms. The Ubiety software platform supports everything from delivering an HD 360° stream to YouTube, Facebook and Periscope through to multi-viewpoint VR event coverage delivered to customisable VR apps and HTML5 players.

Ultra low latency streaming

In April 2018, Room One, an AR, VR and AI platform provider, approached Focal Point VR to provide a 360° video camera rig, live stitching and streaming solution for a 5G trial zone installation at the Hermitage Museum in St. Petersburg, Russia. The application was a future technology demonstrator that mixes live-stream immersive VR video with an interactive haptic setup controlling a remote robot arm. Focal Point VR's standard solution met 90% of the client's goals but needed a low latency streaming solution suitable for the interactive environment. As a result, the team at Focal Point VR started looking for a robust alternative to RTSP for its ultra-low-latency live VR video solution.

Adding NDI to the pipeline

After looking at a number of protocols, including WebRTC, the programming team found the Network Device Interface (NDI) SDK easy to implement, with initial testing showing NDI to be stable and to work well at the ultra-high resolutions the team used. NDI is a royalty-free software standard that enables video-compatible products to communicate, deliver and receive broadcast-quality video in a high-quality, low-latency manner that is frame-accurate and suitable for switching in a live production environment. By incorporating NDI into its VR video live production workflow, Focal Point VR provided its live stream processor (the FP-A6 model) and customised NDI playback using Oculus Rift headsets. Running over a GigE network, the camera-lens-to-headset-display latency was less than 200 ms, enabling real-time interaction with the remotely controlled robot.
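As a rough sketch of what "incorporating NDI" looks like in code, the following minimal C sender uses the public NDI SDK send API. The source name, the 3840x1920 frame size and the single test frame are illustrative assumptions, not details of the Focal Point VR implementation:

```c
/* Minimal NDI video sender, sketched against the public NDI SDK. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <Processing.NDI.Lib.h>

int main(void)
{
    if (!NDIlib_initialize())
        return 1; /* unsupported CPU or the NDI runtime failed to load */

    /* Create a named sender that NDI receivers can discover on the LAN. */
    NDIlib_send_create_t send_desc = {0};
    send_desc.p_ndi_name = "VR Stitcher Output"; /* hypothetical name */
    NDIlib_send_instance_t sender = NDIlib_send_create(&send_desc);
    if (!sender)
        return 1;

    /* Describe one progressive UYVY frame; a real stitcher would fill
       p_data with the stitched 360-degree image on every frame. */
    NDIlib_video_frame_v2_t frame = {0};
    frame.xres = 3840;
    frame.yres = 1920;
    frame.FourCC = NDIlib_FourCC_type_UYVY;
    frame.frame_format_type = NDIlib_frame_format_type_progressive;
    frame.frame_rate_N = 30000;
    frame.frame_rate_D = 1001;
    frame.line_stride_in_bytes = frame.xres * 2; /* UYVY: 2 bytes/pixel */

    size_t size = (size_t)frame.line_stride_in_bytes * frame.yres;
    frame.p_data = (uint8_t *)malloc(size);
    memset(frame.p_data, 128, size); /* flat mid-grey placeholder image */

    /* In a live loop this is called once per stitched frame. */
    NDIlib_send_send_video_v2(sender, &frame);

    free(frame.p_data);
    NDIlib_send_destroy(sender);
    NDIlib_destroy();
    return 0;
}
```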

Technologies of Tomorrow

More recently, Focal Point VR has been approached by a number of clients looking for a low latency, high reliability live stream to a 360° dome providing a shared VR experience. To meet this demand, the programming team have developed a custom 8K cylindrical 360° video solution, which pairs Focal Point VR's proprietary packing technology with NDI running over the internet to deliver the fully immersive experience.

Jonathan Newth, CEO at Focal Point VR, said, "Integrating NDI into our 360° workflow was a very simple task and it has allowed us to deliver a higher quality, more flexible solution to our clients. We are also talking to NewTek about how to integrate NDI more completely into our multi-camera VR workflow, replacing traditional fibre SDI with a more flexible network backbone, and we are looking forward to continuing our work with the NewTek team."

www.newtek.com

Image: NewTek



The Future: It's Whatever You Want It To Be

In the early days of computing, implementing any kind of functionality in software was to be avoided at all costs, such was the performance penalty compared with implementing the functionality in hardware. But times have moved on. We now have more computing power – processor performance, memory size and speed, ASICs, FPGAs and GPUs – than we know what to do with. That's brought software back to the fore – system performance is no longer constrained by the inadequacies of the underlying hardware – because it brings numerous advantages. But the importance of hardware should not be underestimated. This article will review how the best systems solutions leverage both the flexibility of software and the power of hardware.

In a world where the rate of change is not only relentless, but accelerating – whether that's in technologies, markets or customer requirements – software holds the key when it comes to responding to those changes. That's no less true for the broadcast industry.

Historically, ours was an industry that relied on proprietary hardware, optimised for specific applications. There can be little doubt that, for a long time, that paradigm worked well. No, those solutions weren't cheap, and customers found themselves locked in to one or more vendors – but in an environment that was, broadly, stable, that didn't really matter: the investment could be amortised over several years.

But: over the past few years, there have been two changes of huge significance. The first, of course, is that the broadcast industry has undergone – continues to undergo – a period of huge upheaval, driven by changing consumer content consumption patterns and ubiquitous high speed connectivity.

Unrelenting rise

The second has been in technology. The advent of IP has been transformative, and will be increasingly so – and it has been accompanied by the unrelenting rise in the power of general-purpose computing. So-called 'COTS' – commercial off-the-shelf – hardware platforms are now capable of delivering hitherto undreamt-of levels of performance, at prices that are driven by the growing commoditisation of technology.

The parallel emergence of a rapidly changing broad-cast landscape and new levels of hardware capability is, to say the least, serendipitous. The industry needs to position itself for an extended period of tumultuous change – and the technology exists to enable that to happen relatively straightforwardly.

It has one significant implication. The broadcast industry of the future will be software-based, from the lowest hardware level to the highest application level.

At the highest level, almost any functionality you will need will be delivered by software – via industry-standard servers, platforms, interfaces and interconnects. That's been an emerging theme of IBC in recent times. It has long been a mantra that companies buy solutions, not computers – and now, the nature of those solutions is changing forever.

We’ve seen the rise of so-called ‘microservices’ – compact, single-function software components tuned to perform a specific media function. These create a ‘pick and mix’ approach that allows customers to choose only the functionalities they need – but the microservices are engineered to work seamlessly together, in whatever combination.

What's less widely appreciated, though, is the growing role of software at the lowest levels in the computing hierarchy. Take network probes, for example, which are at the heart of our business.

Not all software is as visible as, for instance, our Instrument View GUI that enables many of our probe users to interact simply and intuitively in real time – from wherever they are in the world – with the network data our probes provide. Much of our software development is invisible to the naked eye.



Image: Simen K. Frostad

SIMEN K. FROSTAD

Chairman, Bridge Technologies

www.bridgetech.tv



Not always comfortable

Although we're perhaps perceived as a hardware company, the truth is that we are primarily a software company – and always have been. That's not always a comfortable place to be, not least because there is a perception that software is inherently unstable.

Windows, for example, is often derided for its alleged instability. From that point of view, it's compared unfavourably with Apple's Mac OS X. But here's the thing. Apple has absolute control over the hardware environment in which OS X runs. Microsoft, on the other hand, has almost none. While OS X is a fantastic achievement, it could be argued that Windows is an even greater achievement because it runs so well on so many different platforms. Software, in general, only fails when it relies on hardware to do something – and the hardware doesn't do it.

The key – for Bridge as much as for Microsoft – is to abstract software from the underlying hardware level to the maximum extent possible – to make it as hardware-agnostic as possible – not least because that delivers the portability that enables a solution to be simply migrated from one platform to another, with enormous benefits in terms of lower cost, easier upgrade and improved scalability, for example. That's a huge challenge because, historically, we have relied on hardware to provide key hooks – such as timing – that underpin the performance and functionality of the software.

Among the most daunting of those challenges is maintaining absolute accuracy. Our network probes have to be incredibly precise in order to deliver meaningful information in an environment in which, for example, a standard HD ST2022-6 SDI packet stream will deliver 270 packets every millisecond: roughly one packet every 3.7 microseconds. We've now managed to move ourselves to a position where our software enables us to achieve what we once relied on hardware to do.

Strong foundations

From that point of view, the end application can be likened to a building: without strong foundations, it will be prone to failure. That's why, at Bridge, we've spent a huge amount of time crafting each individual underlying 'brick' of code – to deliver the absolute reliability, repeatability and stability that our customers rightly expect from us. It's an endeavour that's not for the faint of heart – but we've spent 14 years doing it and, in all modesty, it's something we've become very good at.

And, as we move forward, we meticulously revisit all those legacy blocks of code to find out how they can be improved. We’ve had significant success in that endeavour, achieving significant performance increases as we find ways of doing things better. That initiative has been of enormous benefit to our customers, because we’ve been able to provide regular performance upgrades to them – and because we’ve ensured that our code is independent of the hardware, they can pretty much all take advantage – whether they’re VB120, VB220, VB330 or VB440 users. Software is unique in its ability to provide such pain-free, affordable enhancements to an existing installation.

It’s also the key enabler for some extremely exciting new network technologies. The hardware exists to sustain switching speeds that are orders of magnitude beyond where we are today: it just needs the appropriate software to be developed.

There are already great examples of the advantages that software can bring. Take Remote PHY, for example, where software has enabled complexity to be centralised, minimising the cost of deploying large networks. Or what about virtual radio, where a similar software-based approach has transformed the viability and economics of the medium?

Numerous advantages

The fact is that, in the early days of computing, implementing any kind of functionality in software was to be avoided at all costs, such was the performance penalty compared with implementing it in hardware. But times have moved on. We now have more computing power – processor performance, memory size and speed, ASICs, FPGAs and GPUs – than we know what to do with. That's brought software back to the fore: system performance is no longer constrained by the inadequacies of the underlying hardware, and software brings numerous advantages in terms of flexibility, upgradability and scalability that have become essential to how we deal with a future that, if not uncertain or unknown, has certainly become harder to predict.

If there is a small cloud on the horizon, it's that we can be very conservative. That's hardly surprising: we've built an enormously successful industry based on technologies and platforms that have worked well for us and that we trust to deliver. The good news is that software, written as it should be, is no less trustworthy. Few deny that the world is changing, and we need to change with it. Whether that means embracing IP, or looking forward to a future in which almost anything we desire can be delivered quickly and effectively by software – it's good to know that we have the technologies we need in order to continue to succeed.

Image: Bridge Technologies


IP Comms Are Transforming AV Productions And Keeping Crews On The Same Page

The difference between a successful AV production and an embarrassing event filled with technical errors often comes down to the crew communication links (Comms) used to keep everyone on the same page. The margin of error is small, and the consequences of a lost audio or video signal or a missed cue can have a significant impact on a project's bottom line.

Perhaps that's why intercoms have taken on an ever more important role in live AV events. Intercoms create the integral links between production crews and directors, and the technology to maintain those links is evolving rapidly.

For many productions these days, including live television, corporate events and even dramatic stage productions, the Internet Protocol (IP) is transforming the intercom market, adding flexibility, scalability and mobility for users. Sophisticated belt packs and I/O matrix systems are now paired with consumer cell phones and audio networking standards like AES67 and Dante to give production crews the ability to integrate a wide variety of communications tools on a common network. This also makes them easy to use and deploy.

In tandem with IP, the development of wireless intercom systems has increased mobility for production crews. However, the impact of the now completed broadcast TV spectrum auctions in the U.S. has forced intercom companies to develop "frequency agile" solutions that can automatically search for and find an unused frequency to work with. This allows the new generation of intercom systems to work in environments with significant RF interference from other production companies as well as from high-traffic consumer cell phone use.

Therefore, users of this technology should be aware of interference issues related to this migration to new spectrum bands, especially as an increasing number of wireless communication systems are deployed at live events. The key, most experts say, is to plan carefully and understand the environment in which you are operating.

From a manufacturing standpoint, the use of IP-enabled technology has led to the introduction of smaller yet more flexible equipment that can be configured in a myriad of ways for both fixed venues and temporary spaces. In this way, system resources can be shared among different parts of a production—or they can be used to support multiple studios located in the same building or anywhere in the world. An IP-enabled intercom system can also be employed to route audio (Comms) signals as required (something not possible in the analog world). Basically, equipment vendors are now integrating a lot of different communications (voice, data, etc.) into the same intercom system, thanks to IP.

However, the loss of traditional intercom and microphone spectrum due to the auction and subsequent channel repack has placed a burden on product development. The recently completed Phase 1 of the U.S. TV station repack has affected a wide array of industries—from broadcast to corporate and even live stage productions—and has introduced a few wrinkles as to what's available frequency-wise. In addition, if your company works internationally, you should be aware that different countries have different rules that can negatively affect spectrum availability.

[In North America, 900 MHz is the most popular band used, while 2.4 GHz works well there, as well as most every place on the planet, without the need for the end user to have a license.]

Traditional intercom systems have heretofore been matrix-based, where routing and mixing signals would typically be sent back to a central location. With an audio-over-IP infrastructure, users are able to eliminate the matrix and distribute that routing and mixing across all the devices that are part of the intercom system. Embracing IP also brings the advantage of a significantly reduced amount of cabling and interconnections: essentially, an IP-based intercom system just requires Ethernet cabling and switches to set up an entire network. Devices can then be plugged in anywhere on the network and do not have to be routed back to a central location.

And then there's scalability. With an IP network, the user is not limited by the amount of I/O that was required in a matrix system. Instead, the limitations are mostly based around available bandwidth, but once that is in place, an unlimited number of audio channels, belt packs and POPs (points of presence) supported by Ethernet cabling and switches can be leveraged.
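To make the "bandwidth is the limit" point concrete, here is a back-of-the-envelope estimate in C. All figures are illustrative assumptions (AES67-style 48 kHz / 24-bit mono streams, 1 ms packet time, rough per-packet overhead, a 1 Gbps link with 70% usable), not values from the article or from any particular vendor:

```c
/* Rough channel budget for an audio-over-IP intercom network. */
#include <stdio.h>

int main(void)
{
    const double link_bps        = 1e9;  /* 1 Gbps Ethernet link       */
    const double usable_fraction = 0.7;  /* headroom for other traffic */
    const int sample_rate     = 48000;
    const int bytes_per_samp  = 3;       /* 24-bit linear PCM          */
    const int packets_per_sec = 1000;    /* 1 ms packet time           */
    const int overhead_bytes  = 78;      /* Eth + IP + UDP + RTP, approx. */

    double payload_bps  = (double)sample_rate * bytes_per_samp * 8;  /* 1.152 Mbps */
    double overhead_bps = (double)packets_per_sec * overhead_bytes * 8;
    double per_channel  = payload_bps + overhead_bps;  /* per mono stream */

    printf("~%.2f Mbps per mono channel\n", per_channel / 1e6);
    printf("~%d channels fit on the link\n",
           (int)(link_bps * usable_fraction / per_channel));
    return 0;
}
```

Packing several channels into one stream amortises the per-packet overhead, so real systems do somewhat better; the point is simply that even hundreds of comms channels barely dent a gigabit link.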

Perhaps the best part is that as the Comms industry continues to embrace IP—enabling the use of manufacturer-supplied headsets alongside consumer smart phones—users will see more and more compatibility among third-party devices. That means existing hardware can be used in tandem with new software-enabled devices in a seamless way. There's also the ability to connect remote locations together in a much simpler and more cost-effective way.

Indeed, the popularity of cell phones has generated increased demand for wireless intercom infrastructures. Using a highly familiar, untethered device just makes sense.

So the key is for manufacturers of intercom systems to develop wired and wireless solutions that support the AES67 and Dante protocols, so that customers can use standard local area networks at venues or in their home production studios. These new wireless intercom products must also operate in the VHF band, which is undesirable for consumer communication equipment (which is what the spectrum auction winners like AT&T and T-Mobile want to use the vacated spectrum for). They can also move from the FM band to AM to make their products more spectrally efficient.



MICHAEL GROTTICELLI

is an experienced editor and regular contributor to FKT’s Tech Across America column.

Image: M. Grotticelli



[It’s been said that you could put 200 belt packs and 30 base stations in the same UHF footprint as one four-drop FM system. Five years ago the engineering technology wasn’t even available to do it.]

Those are just some of the benefits that moving to IP has brought to the world of Comms. And by all accounts, that sounds like a good thing.

BNJ FM Visualizes Its Live Shows

BNJ already has a portal with access to video content (BNJ.tv), and now wants to offer live viewing of its programs, with the videos available afterwards.

StudioTalk will enable the radio station to get a complete video production infrastructure within its actual studios, ensure live production of the daily shows and distribute the streams to the radio website as well as its social networks. It is BCE's all-in-one solution to produce shows, manage content, add special effects, control sets and broadcast programs. Through an intuitive graphical user interface, a touchscreen and a dedicated remote, the customer can control the cameras, the feeds and the studio sets. It also includes a content management system and on-the-fly transcoding for immediate delivery to the viewers.

BNJ FM broadcasts three radio programs for the counties of Neuchâtel, the Jura and the Bernese Jura in Switzerland. They report on local news and the concerns and interests of the Swiss Jura Arc. The general-interest, pop-rock radio stations can be received via FM, DAB+, the web and mobile applications.
www.bce.lu

Remote Production Solution For Spanish Basketball League

SPANISH DIGITAL TV subscription provider Movistar+ and media and broadcast services provider Telefonica Servicios Audiovisuales, both part of Telefónica, have rolled out Net Insight's remote production solution to cover the production of the Spanish Basketball League (Liga Endesa) and to enrich their EuroLeague and EuroCup productions with personalized feeds.

The setup is based on Net Insight's remote production solution with Nimbra MSRs and supports up to six cameras with two return feeds, including specialized data transport such as camera control and intercoms, over 1 Gbps links connecting the arenas with the main production facilities. The full package also includes a backup solution based on the Nimbra VA platform, supporting up to two cameras with one return feed and the capacity to reroute all the data requirements. Nimbra VA and internet contribution is now mature enough for remote production, with backup being a first step to put it into the production chain.

The order was won together with Net Insight's Spanish partner MoMe in the third quarter, and MoMe has provided all integration and implementation services.
www.netinsight.net

TF1 Opts For Axon’s Cerebrum Software

AXON Digital Design's Cerebrum master control software, a control and monitoring system, offered the features that TF1 needed to manage its audio/video processing, multiviewer and tally/UMD management.

By acting as the nerve centre for all routing, Cerebrum simplifies multi-device monitoring and control onto one easy-to-use interface. It supports a range of devices from different manufacturers – including routers, production switchers, servers, receiver decoders, multiviewers and waveform monitors – using either SNMP (Simple Network Management Protocol) or third-party protocols.

TF1 operates five television channels in France as well as several special-interest pay-TV channels and their digital offshoots. They will expand the system in early 2019 so that it can control more devices.
www.axon.fr



Service Oriented Architectures Are The Foundation Of Flexible Video Production

By now it's clear that software is more than capable of handling all types of audio and video production; that long-simmering debate is settled. Be it in the studio down the hall or at a live sports event 3,000 miles away, virtualizing common production and distribution tasks (video switching, graphics insertion, ad insertion, etc.) makes economic sense. Therefore I'm declaring 2018 "the year of virtualization and the Service Oriented Architecture (SOA)".

This was the year when the video industry finally woke up to what the IT world has known for decades: how to move large amounts of data reliably and quickly. The video industry just had to get the timing right, and it did.

The year started with a lot of experimentation and trials of IT-centric technology strategies that extensively leveraged software to produce and distribute such major live events as the Olympics and the World Cup. Both are world-class telecasts watched by millions, and more than ever before both events relied on software to get the signals into consumers' living rooms. Both events also proved that software versions of heretofore hardware-reliant devices and systems were up to the task.

Among the many advances this year in live production technology and workflows, by far the most significant is the emergence of the virtualized remote production network. Some call them "At-Home" or REMI (remote-integration model) productions, where most of the physical technology is not on site at the sporting event to be televised but at a remote facility. Camera signals from the event are sent back to this facility, where they are intermixed with graphics and commercial advertising (some served in specific regions and directed at individual viewers via an IP address) and then sent on to consumers' homes or mobile devices.

This would not be possible at such large scale without the use of a dedicated suite of software, a bit of interconnectivity via secured and unsecured networks (the internet), a single operator to monitor disparate feeds, and a variety of software-enabled orchestration and control systems.

Most software architecture implementations used thus far have included the use of an SOA software design, allowing virtualized media functions to be dynamically connected to one another for live remote production workflows. Micro servers (really, shared-memory multiprocessors, or SMPs) and their virtualized functions can be strung together within a single server or between multiple racks, using the IP protocol to connect them all. In this way live video can be reliably distributed to a central (main) broadcast center and on to many platforms simultaneously, without introducing delay.

The benefits are clear. Deploying a software-based workflow—instead of traditional hardware-only or hybrid architectures—enables broadcasters to turn different functions on or off as needed. This allows broadcasters to experiment with new types of tech-driven programming without the burden of a large equipment budget.

As an example (and there are many others), a company called Aperi markets a software-defined platform called V-Stack, an FPGA-powered software platform that, the company says, provides more compute power than CPU- or GPU-based processing. Optimized for live production, it is claimed to deliver much faster and more agile remote production with lower latency. An Aperi remote production network can be deployed on either generic FPGA-powered servers or Aperi's dedicated edge servers.

With V-Stack at its core, the platform enables the immediate start and stop of broadcast functions through apps that are accessed via tiered licenses. The platform is also based on container technology (with automatic discovery and registration proven in the data center software world), removing the need for manual processes and administration or field engineers. At this year's IBC Show, Aperi staged a demonstration of its platform, showing how it can reduce seconds of latency to just milliseconds.

In 2014, ESPN opened its Digital Center 2 (DC2) in Bristol, Connecticut. DC2 is a $125 million, 190,000-square-foot broadcast facility with a software-intensive infrastructure that can handle 60,000 simultaneous signals and 46 Tbps of data throughput. While the project required a massive feat of engineering and significant costs, ESPN found long-term benefits from the reliability, flexibility, determinism, power, and simplicity of the software systems they deployed.

Among a number of benefits, the architecture as designed includes an internal Broadcast Audio Monitoring (BAM) service that can listen to 32 streams on each panel. In addition, it can keep track of all the devices talking to the network. As a result, BAM can monitor 16,000 streams at once to check status, analyze the type of stream, define who can connect to what, and reach out to connect if streams stop. The software defines the available streams for the hardware, while each node stays aware of the connections as defined by the software. This makes it easy to reestablish a connection if a signal is dropped.

Businesses from manufacturing to health, insurance to telecommunications, have adopted the SOA model. The broadcast industry is catching on fast.

The adoption of SOA leads to a more agile organization that can adapt to change, can optimize processes to control costs, and can provide management with better visibility to track their business model's success. That's because in an SOA, interfaces are defined at the business level, not the technical level. So a service like transcoding or accounts, for example, can be swapped out without impacting the products or people providing the existing services. From this stems the business agility.

Perhaps the biggest clue that software architectures are here to stay is found in the hiring of new staff. All new applicants to ESPN and any other major media organization must have an extensive IT background and a close familiarity with software. That's because scalable, agile, format-friendly, software-centric architectures are the key to it all.



MICHAEL GROTTICELLI

is an experienced editor and regular contributor to FKT’s Tech Across America column.

Image: M. Grotticelli


1080p50 HDR Technology at Next Gen ATP Finals

Image: 2018 Peter Staples


ATP Media, the broadcast arm of the ATP World Tour, produced and distributed a 1080p50 HDR test feed at the recent Next Gen ATP Finals in Milan in November.

The tournament is the ATP Tour's showcase for innovation across all aspects of the men's professional game, with rule changes, line calling without humans, and player and coach data services while the match is in progress.

Carried out in partnership with Gearhouse Broadcast, the test covered the live and non-live workflows on site, marking the first time HDR content has been natively incorporated into replays and edit workflows on a live sports outside broadcast, according to the parties.

This latest test follows an initial 1080p50 HDR production trial during the Nitto ATP Finals in 2017, which put 4K and 1080p50 HDR side-by-side in a live tennis environment. The results of that test suggested to ATP Media that it was 1080p50 HDR rather than 4K HDR that would most efficiently deliver the clear benefits of HDR to its broadcasters and to viewers at home. This follow-up test was an important next step in delivering a 1080p50 HDR test feed to broadcasters with a view to gathering their feedback for future planning.

The test was also a next step in incorporating HDR and SDR content across all onsite production, including the non-live content available on the EVS network, ENG camera footage, SDR archive and SDR content from other sources. As host broadcaster, ATP Media produces a news service, premium content and bespoke social media content on site for broadcasters, managing HDR and SDR in the post-production workflow.

Overall, the test further demonstrated HDR technology's ability to significantly increase the picture quality of tennis matches for fans at home. In addition, while ATP Media has no current plans to distribute content in native 4K, 4K cameras were used on site to capture live match action, as high-resolution acquisition is very much part of the workflow to produce high-quality 1080p HDR for broadcasters to distribute as part of a 4K HDR service.
www.atpmedia.tv
www.gearhousebroadcast.com


Image Composition Starts With The Camera

Perhaps one of the biggest worries any cinematographer has as they look into the future is, "Will my work eventually look dated?"

It's a legitimate fear considering how much emphasis the film and TV industries put on resolution, but in the end, it's more about the artistic value of the work rather than whether or not it was shot in 4K, 8K, or whatever is coming down the pike.

Many cinematographers agree that no matter how many pixels there are, the artistic intent should always be respected. Content will always be king and make up for a lot of anomalies in your footage. It's the subject matter that keeps footage from becoming dated.

Proper image composition also has a lot to do with a project’s longevity and that can only be captured with the right camera and image sensor. Therefore, choosing the optimum camera is critical to the type of image you want to create for your project.

So, what is the optimal camera?

A camera that allows you to shoot in 4K is great, but you might get better results with an HD camera. Of course resolution is a big creative factor to consider before choosing a camera, but it's not the only one. There are other, potentially more important features you should consider, like latitude, dynamic range, form factor, and sensor size. Even if you've got a camera that shoots beautiful 4K images, it won't do you much good if, say, there isn't enough light to capture them in.

Thankfully, many of the newer 4K cameras that have come out in the last couple of years take these important specifications into consideration and help ensure a successful outcome. However, it’s still imperative to measure the value of a camera based on the needs of your project, rather than its own list of features.

Indeed, there are so many video camera options available, it's easy to become overwhelmed by the choices. Instead of picking a specific camera or brand, it's best to first decide what type of camera you need. A small form factor camera might be more appropriate to a scene where handheld work is preferred. A large sensor might capture what you are after. Careful pre-production planning will help identify the right one for you.

Many manufacturers now offer 4K video in the palm of your hand. With built-in image stabilization, users can easily capture decent footage. You can also edit video on your phone and immediately upload your videos online.

You might snicker, but today's cell phone cameras are no joke. There is already an entire Swiss news channel, Léman Bleu, with content shot on iPhones, and viewers are watching without complaint. Video journalists there report that while the footage quality is often sufficient, it can be difficult to capture quality audio, especially if your subject is too far away from the built-in microphone. That's why reporters have been equipped with a separate audio recorder.

Moving up the ladder, most "professional" video cameras have high-end sensors and use interchangeable lenses, with the ability to shoot both HD and 4K footage. Using these cameras assumes a basic knowledge of color grading, setting audio levels, and an understanding of video codecs (for file delivery and storage reasons).

These cameras are not simple to set up either. You will need the proper lenses for the camera's mount, a monitor or viewfinder, battery packs, as well as any other necessary support gear. Yet, if set up properly, professional video cameras will capture stunning images. You may also have high-quality audio built in depending on the model, and all of these cameras feature professional inputs and outputs for external gear.

If you are looking at shooting high-end commercial work or feature films, a true cinema camera is going to be your best option. It's also best to have an entire camera crew dedicated to working with these cameras, as their operation and the workflow involved with handling the large amounts of data are more complex.

Managing the increased data consumption of 4K footage is another critical factor for any 4K workflow. Files are stored differently when using different cameras. If you're used to shooting in HD, 720 or 1080, media management might be something you do sporadically throughout the day as needed. However, 4K is a whole other animal and requires a dedicated plan for dealing with the deluge of data you'll be capturing. This is an important factor to keep in mind if you're an indie filmmaker who typically works on small crews. This is especially important if you've never considered hiring a DIT to manage all the data coming in.
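To put the "deluge of data" into numbers, here is a quick uncompressed-rate comparison (a sketch with assumed 10-bit 4:2:2 sampling at 25 fps; real cameras record compressed, so treat these as upper bounds):

```c
/* Rough uncompressed data-rate comparison, HD vs. UHD 4K. */
#include <stdio.h>

static double gbytes_per_hour(int w, int h, double bits_per_pixel, double fps)
{
    double bits_per_second = (double)w * h * bits_per_pixel * fps;
    return bits_per_second * 3600.0 / 8.0 / 1e9;
}

int main(void)
{
    /* 4:2:2 at 10 bits: 10 bits luma + 10 bits shared chroma = 20 bits/pixel */
    printf("1080p25: ~%.0f GB per hour\n", gbytes_per_hour(1920, 1080, 20.0, 25.0));
    printf("2160p25: ~%.0f GB per hour\n", gbytes_per_hour(3840, 2160, 20.0, 25.0));
    return 0;
}
```

The 4K figure is four times the HD one, which is why a dedicated media-management plan (or a DIT) stops being optional.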

Storage is also a limitation to using an iPhone. If you are shooting long videos or a ton of footage, you need plenty of space on your phone to store the video files. If you don't plan on shooting much and are only interested in making quick short videos, this may be your best camera for filming. If you'd like more production value, then you'll need to look into one of the other options above.

The technology is advancing so fast that, while today we're all talking about the challenges of 4K data management, shooting in low light, and choosing between a camera with high resolution or a small form factor, a whole range of new challenges will be identified and resolved next year and going forward. The key is to stay on top of the latest news about the camera or cameras you prefer, and do your homework. Image composition starts with the camera in your hand.

Image composition is a very subjective and highly personal thing. How you frame shots, with the right camera for the job, is only limited by your imagination.



MICHAEL GROTTICELLI

is an experienced editor and regular contributor to FKT’s Tech Across America column.

Image: M. Grotticelli


Catalonia: Canal Blau, La Xarxa and TV3 at Castellers Events

Image: jarcosa / istockphoto.com



ALONG THE SEASON from April to September, regional TV companies belonging to the la Xarxa group filmed Catalonian cultural events almost every weekend. The footage was to be shared with fellow regional broadcasters La Xarxa and TV3.

The three broadcast partners needed a solution that guaranteed the low latency transport of the action for live broadcast in news and current affairs programming. It was important that the solution could, with redundancy, transport high quality HD video over an FTTH network.

A castell is a human tower built traditionally at festivals throughout Catalonia and the region of Valencia. Castells were first documented in 1712 and originated within Valls, Catalonia, near the city of Tarragona. Interest in castells grew in the 1960s and 1970s, and in 2010 castells were declared a masterpiece of human heritage by UNESCO. In 2015 a festival hosted 99 groups of castellers, at which the tallest human tower was erected and safely dismantled, having 10 levels with four people in each level.

At the Sant Joan event on 24th June, Canal Blau cameramen captured footage at Valls, Catalonia, Spain. This was contributed as RAW HD video using a TVU One mobile unit in combination with the TVU Grid solution over an FTTH network to La Xarxa HQ in Maternitat, Barcelona. In turn, La Xarxa used TVU Grid to transport this footage to TV3 and back to the Canal Blau studio. A second unit in Valls edited footage at a mobile unit, which was also contributed to La Xarxa for broadcasting purposes through all their different channels.

TVU One delivers high-definition picture quality with bandwidth efficiency, all within a flexible, compact and rugged hardware design. Weighing just 2 kilograms, TVU One ensures that outside broadcast cameramen are able to get the shots unencumbered by wires or heavy and bulky equipment. The solution enabled Canal Blau to contribute live video with the versatility of a small, lightweight, IP-based high-definition video field transmitter without sacrificing performance, features or picture quality. From the middle of the crowded Valls square where the castells were built, they were able to rely on TVU One to transmit at >9 Mbps with 1 second of delay.

TVU One uses the vendor's one-button operation for portable live video streaming transmitters. The system maximised ease of operation for Canal Blau cameramen by requiring no in-field configuration and a boot-up time of less than 20 seconds.

The IP-based video switching, routing and distribution solutions enable broadcasters to share a TVU One live video transmission with any other TVU Grid-enabled station, operations center or physical location. The three broadcasters involved in the Valls Castellers project utilized the fully integrated TVU One and TVU Grid solution to share both RAW HD video and a fully edited programme.
www.tvunetworks.com


Management of High Dynamic Range and Wide Color Gamut in Standards Conversion Applications

PAOLA HOBSON

Managing Director, InSync Technology

ø www.insync.tv

Since the earliest days of TV broadcasting, global viewers have wanted to watch programs from other countries — everything from live breaking news and international sporting events to entertainment and cultural programming. However, simple international program exchange has never been possible because of huge differences in TV standards around the world. Similarly, transferring movies to formats suitable for home-TV viewing inherently requires both format and frame rate conversion for audiences in all regions.

Standards converters solve these problems by changing the format and frame rate of the content to suit the required display specification. However, as technology advances and new formats and new production tools become available, more complications arise in the conversion workflow.

Whilst high dynamic range (HDR) and wide color gamut (WCG) increase the artistic choices available to content producers, converting that content for audiences watching on standard dynamic range (SDR)-only or narrower-color-gamut displays can be problematic. Fortunately, a number of standards conversion manufacturers are starting to include tools supporting HDR and WCG in their products, but these tools must be used with care, as there are many pitfalls.

This article discusses conversion of content mastered with HDR and WCG, highlights some of the possible problems, and suggests solutions.

Introduction

Producers of UHDTV content have many new tools at their disposal to enhance programme-making, including high dynamic range (HDR) and wide color gamut (WCG). High dynamic range enables greater contrast ratios, such that fine variations in blacks can be accommodated




at the same time as very bright whites.

Wide color gamut, as defined in ITU-R BT.2020, extends the color space definition such that a larger range of colors can be represented in the scene. Figure 1 illustrates this concept.

Figure 1: Illustration of color spaces BT.709 and BT.2020

Multiple HDR standards have been defined, and the ITU report BT.2408-1, titled “Operational practices in HDR television production”[1], presents in-depth guidance on the use of high dynamic range in television production using the Perceptual Quantization (PQ) and Hybrid Log-Gamma (HLG) methods specified in Recommendation ITU-R BT.2100[2]. Within the PQ standard, the producer will select a peak brightness level depending on the expected viewing conditions — information that must be conveyed within the programme’s metadata. HLG is described by the BBC (its originators) as a “scene-referred signal,” which means that it’s possible to obtain the same artistic effect on any target display, independent of that display’s inherent brightness, and no additional metadata are needed.
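To make the two transfer characteristics concrete, the following is a minimal Python sketch of the PQ EOTF and the HLG OETF using the constants published in Recommendation BT.2100 (the PQ constants originate in SMPTE ST 2084). It is an illustration of the arithmetic only: no quantisation, no black-level lift, and no HLG system gamma.

```python
import math

# PQ (SMPTE ST 2084 / BT.2100) EOTF constants.
M1 = 2610 / 16384           # 0.1593017578125
M2 = 2523 / 4096 * 128      # 78.84375
C1 = 3424 / 4096            # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(signal: float) -> float:
    """Map a non-linear PQ signal in [0, 1] to display luminance in cd/m^2."""
    p = signal ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

# HLG (BT.2100) OETF constants.
A = 0.17883277
B = 1 - 4 * A
C = 0.5 - A * math.log(4 * A)

def hlg_oetf(e: float) -> float:
    """Map normalised scene light in [0, 1] to a non-linear HLG signal in [0, 1]."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return A * math.log(12 * e - B) + C

print(pq_eotf(1.0))   # 10000.0 - the PQ system peak of 10,000 cd/m^2
print(hlg_oetf(1.0))  # ~1.0 - full-range scene light maps to full signal
```

The asymmetry visible in the sketch is exactly the point made above: the PQ function yields absolute luminance, which is why PQ content needs the grading-level metadata discussed later, while the HLG signal remains relative to whatever the target display can deliver.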

Note that producers of UHD content don’t necessarily use HDR. It is equally common to find UHD content mastered at standard dynamic range (SDR), and, while most existing HD content will use SDR, producers of some newer HD content might have taken advantage of HDR production tools. Project information from the BBC[3] provides more detail on the opportunities HDR offers.

WCG is specifically defined in BT.2020 as applicable to UHD content. However, UHD content may equally be produced for the BT.709 color space, creating significant opportunity for confusion — and therefore downstream quality problems — when processing the content.

Conversion Applications

As broadcasters transition to UHD services, there are many situations in which SD — or, more commonly, HD — content needs to be integrated into a UHD production. In these cases, the SD or HD content might need frame rate conversion as well as resolution upconversion. Similarly, UHD content is often downconverted to HD for distribution on existing delivery networks, as illustrated in the simplified HD workflow of Figure 2.

If the UHD content was produced in HDR with BT.2020 color, but the HD workflow has been configured for SDR with BT.709 color, then clearly the Figure 2 example needs to be augmented with additional processing, as illustrated in Figure 3.

UHD frame rate conversion might also require HDR/WCG processing. For example, the incoming programme might have been mastered according to the PQ standard, but the output is required to be HLG. In that case, the workflow needs a PQ-to-HLG transformation in addition to the frame rate conversion, as illustrated in the example of Figure 4.

Figure 2: Example HD workflow

Figure 3: Example HD workflow with HDR/WCG processing

Figure 4: Example UHD frame rate conversion with HDR-HDR transform



When converting content as in the examples above, it is expected that the converter will examine the incoming metadata and make a decision about the required transforms. Such behaviour is normal in common conversion workflows, in which a user would ordinarily leave the converter in “automatic mode” on the input side. In that case, the converter detects the incoming signal’s format and frame rate and identifies important information from the metadata, such as colorimetry and dynamic range.

A typical user of standards conversion products would expect to have to set only the output parameters, i.e., the format and frame rate to which the output signal should conform. While that approach works for most current converters and conversion applications, as will be shown below, it might not be the most appropriate course of action for content mastered using HDR and WCG.

Possible Conversion Problems

■ (A) COLOR SPACE CONVERSION

If the incoming video metadata are reliable and contain the color space descriptor, then allowing the converter to operate in an automatic mode on the input side will be successful. In that case, the converter interprets the incoming video according to the correct color space. Automatic input mode is still an option even if the input video metadata are unreliable or do not contain a color space descriptor; in that instance, the assumption is that SD content adheres to BT.601, HD follows BT.709, and UHD follows BT.2020.
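That fallback behaviour amounts to a simple lookup, sketched below; the function and dictionary names are illustrative only, not any converter’s actual interface.

```python
# Conventional colorimetry assumptions when metadata is absent or unreliable.
DEFAULT_COLORIMETRY = {"SD": "BT.601", "HD": "BT.709", "UHD": "BT.2020"}

def resolve_colorimetry(format_class: str, metadata: dict) -> str:
    """Prefer the color space descriptor carried in the metadata;
    otherwise fall back to the conventional assumption for the format."""
    return metadata.get("color_space") or DEFAULT_COLORIMETRY[format_class]

print(resolve_colorimetry("UHD", {}))                         # BT.2020
print(resolve_colorimetry("UHD", {"color_space": "BT.709"}))  # BT.709
```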

Sometimes the user knows that the input video metadata relating to color space are erroneous, or that the content does not follow the usual assumptions made in the “automatic” case. It is also possible that content being edited together from multiple sources was mastered to different colorimetry standards. For these situations, it is recommended to force the converter manually to the correct color space. Caution is advised to avoid quality problems, as will be illustrated in the following examples.

EXAMPLE 1: BT.2020 VIDEO ASSUMED TO BE BT.709

In this example, consider UHD content that was produced according to BT.2020 but either carries incorrect metadata stating BT.709, or the user has overridden the correct metadata and instructed the converter to treat the video as BT.709.

If the user then views the output on a BT.2020-compliant monitor, as illustrated in Figure 5(i), the result of the conversion is that colors appear desaturated. This is most clearly visible on the red post, which is correctly presented in Figure 5(ii).

Figure 5: Illustration of BT.2020 content treated as BT.709. (i) BT.2020 video treated as BT.709; (ii) BT.2020 video treated correctly

EXAMPLE 2: BT.709 VIDEO ASSUMED TO BE BT.2020

In this example, consider UHD content that was produced according to BT.709 but either carries incorrect metadata stating BT.2020, or the user has overridden the correct metadata and instructed the converter to treat the video as BT.2020.

As illustrated in Figure 6(i), the result of such a conversion is that colors appear oversaturated on the monitor used to view the output video. Figure 6(ii) shows the correct color presentation.

Figure 6: Illustration of BT.709 content treated as BT.2020. (i) BT.709 video treated as BT.2020; (ii) BT.709 video treated correctly
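The numerical root of both failure modes can be sketched directly. The matrix below is the linear-light BT.709-to-BT.2020 primary conversion given in ITU-R BT.2087; misinterpreting the colorimetry amounts to skipping (or wrongly applying) this matrix. A minimal Python sketch, assuming linear light and ignoring transfer functions:

```python
import numpy as np

# Linear-light RGB primary conversion, BT.709 -> BT.2020 (ITU-R BT.2087).
M_709_TO_2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

# A fully saturated BT.709 red.
red_709 = np.array([1.0, 0.0, 0.0])

# Correct handling: re-express the colour in BT.2020 primaries before display.
print(M_709_TO_2020 @ red_709)   # ~[0.627, 0.069, 0.016]

# Incorrect handling (Example 2): feeding the BT.709 code values [1, 0, 0]
# straight to a BT.2020 display reproduces them with the wider red primary,
# so the colour appears oversaturated. Example 1 is the inverse error:
# BT.2020 values shown with the narrower BT.709 primaries look desaturated.
```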

■ (B) HIGH DYNAMIC RANGE

As discussed above, there are multiple standards for HDR, and the best results are obtained when one standard is used throughout capture, conversion, and display. However, using a single standard is not always possible, such as when a programme is shot in UHD HDR using PQ, but the user is viewing it on an HD SDR TV. Similarly, the programme might be shot in UHD using HLG but viewed on a PQ monitor.

EXAMPLE 1: SDR PROCESSED AS HDR

If the source content is SDR but is incorrectly treated as HDR within the conversion process, then the resultant video will suffer unpleasant artefacts that make it look over-enhanced. Figure 7(i) shows an example in which the detail in the lighter picture areas is lost in an unwanted “bloom.”

Figure 7: Illustration of SDR material treated as HDR. (i) SDR processed as HDR; (ii) SDR processed as SDR

EXAMPLE 2: HDR PROCESSED AS SDR

If the content is HDR but is treated as SDR during standards conversion, then the conversion will tend to reduce the dynamic range, making the video appear somewhat flat. An example appears in Figure 8(i), where the HDR content processed as SDR appears much less bright than when correctly converted.



Figure 8: Illustration of HDR processed as SDR. (i) HDR processed as SDR; (ii) HDR correctly processed

Achieving the Best Results in Frame Rate and Format Conversion

Color space mapping

When converting video material between HD and UHD, or when converting frame rates between different UHD standards, it is important both to choose a converter with the appropriate HDR and WCG tools, and to set the required parameters carefully to match the incoming content and the expected final viewing conditions.

In the following examples, the FrameFormer software standards converter from InSync Technology Ltd is used to illustrate the proper choices required for correct conversion. FrameFormer is available as a component within the Zenium Workflow Designer that underlies Imagine Communications’ SelenioFlex File transcoding solution.

The colorimetry settings are fairly straightforward. Figure 9 shows the controls available in the FrameFormer for SelenioFlex File window, which enable explicit colorimetry to be set for the incoming content.

Figure 9: Example controls allowing color processing in standards conversion

If the incoming video metadata are reliable and contain the color space descriptor, then the recommendation is to select “Automatic.” This selection ensures that the incoming video data are interpreted according to the correct color space. If the input video metadata are unreliable or do not contain a color space descriptor, then it is still acceptable to use automatic mode. In that case, FrameFormer assumes that SD content adheres to BT.601, HD follows BT.709, and UHD follows BT.2020.

If the user knows that the input video metadata relating to color space are erroneous, or that the content does not follow the usual assumptions as in the “automatic” case, then it is recommended to select the color space from one of the three options shown in Figure 9. However, it is essential to select the correct colorimetry to avoid quality problems, as discussed above.

On the output side, if any of the predefined SD, HD, or UHD output standards is chosen, then the same assumptions as in the “automatic” case are applied. For example, an HD output will be assumed to require conformance to BT.709, and a UHD output to BT.2020.

If a different output color space is required, such as UHD content in BT.709, then it is necessary to select a “Custom” output from the output format drop-down menu. From there it’s possible to select BT.709 or BT.2020.

HDR to HDR

Settings for HDR management are more complex. Metadata describing HDR parameters might be missing or incomplete, depending on the ability of upstream equipment to insert the required information. Furthermore, if the HDR format is PQ, then it is important to identify the peak or maximum brightness to the converter. Peak or maximum brightness (also called grading level) is an essential parameter in PQ HDR and is set by the programme maker according to the expected viewing monitor. This parameter is only relevant for PQ content and is specified in nits (cd/m²) at four levels: 1,000, 2,000, 4,000, and 10,000.

If HDR metadata are missing, incomplete, or erroneous, then it is necessary to give the converter explicit instructions identifying whether the content is SDR or HDR, which type of HDR, and, in the case of PQ, the peak brightness, as illustrated in Figure 10.

Figure 10: HDR selection for the input content

For HDR-to-HDR mappings, similar menus are available under the output controls to select the chosen format (see Figure 11).

Figure 11: HDR selection for the output content

A further choice is available in the output menu for HDR conversions in which the maximum supported output level is less than the source level. For these cases, clipping or limiting is applied (see Figure 12).

Figure 12: Hard/soft clip

• Hard clipping means that high brightness levels not supported in the selected output format are hard-clipped to the maximum supported brightness level for that output format.




• Soft clipping means that brightness levels close to the maximum supported levels in the selected output format are progressively attenuated to avoid an abrupt cut-off. A soft clip is an irreversible process. (Both behaviours are sketched below.)
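As an illustration only, the two behaviours can be sketched as luminance mappings; the soft-clip shoulder below is an invented exponential curve chosen for the sketch, not the curve FrameFormer actually applies.

```python
import math

def hard_clip(y: float, out_max: float) -> float:
    """Brightness above the output format's maximum is simply cut off."""
    return min(y, out_max)

def soft_clip(y: float, out_max: float, knee: float = 0.9) -> float:
    """Progressively attenuate levels approaching out_max (irreversibly).

    Levels below knee * out_max pass through unchanged; above that, the
    remaining headroom is compressed so the curve approaches out_max
    smoothly instead of ending in an abrupt cut-off.
    """
    threshold = knee * out_max
    if y <= threshold:
        return y
    headroom = out_max - threshold
    return threshold + headroom * (1 - math.exp(-(y - threshold) / headroom))

# Mapping 4,000-nit PQ material towards a 1,000-nit output:
print(hard_clip(1500.0, 1000.0))   # 1000.0 - highlights band at the limit
print(soft_clip(1500.0, 1000.0))   # ~999.8 - highlights roll off gradually
```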

HDR to SDR

For material that was mastered using HDR but is to be displayed on an SDR monitor, FrameFormer provides tools for HDR-to-SDR mapping. On the output menus, SDR should be selected (Figure 11). Doing so ensures that the FrameFormer software standards converter will carry out the correct mapping.

SDR to HDR

For content mastered in SDR but to be integrated into an HDR programme, FrameFormer offers SDR-to-HDR mapping. SDR should be selected from the Transfer Characteristic drop-down menu under the Input menu (Figure 10), and the required HDR format should be selected from the Transfer Characteristic drop-down menu under the Output menu (Figure 11).

FrameFormer offers a further menu option under High Dynamic Range, which allows the user to expand the highlights when mapping from SDR to HDR, as shown in Figure 13.

Figure 13: Expand highlights

Highlight expansion is explained in detail by the BBC in an April 2017 article[4]. In simple terms, the highlight expansion inverts the camera knee, a nonlinearity commonly applied by the camera transfer function when the SDR content is acquired. This nonlinearity is applied because TV systems tend to operate within a smaller dynamic range than optical characteristics would allow. The camera knee compresses highlights at the top of the dynamic range in order to keep them within the range of the TV system.

Because an HDR display can accommodate a much greater dynamic range than SDR, the BBC Highlights Expansion method allows all luminance values above a chosen breakpoint to be multiplied by a predefined parameter, which increases their value. This process expands the highlights back to more “normal” values with respect to midtones and shadows. As recommended by the BBC, FrameFormer uses a breakpoint of 80 percent of signal level and a multiplier of 2.0.
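Using the BBC parameters quoted above, the expansion can be sketched as a piecewise mapping. This is a naive reading that scales only the excess above the breakpoint (keeping the curve continuous); the published method operates on the inverted camera knee and contains detail omitted here.

```python
def expand_highlights(signal: float, breakpoint: float = 0.8,
                      multiplier: float = 2.0) -> float:
    """Expand SDR highlights above the breakpoint for SDR-to-HDR mapping.

    Values up to the breakpoint pass through unchanged; the portion of the
    signal above it is scaled up, approximately undoing the compression the
    camera knee applied at acquisition.
    """
    if signal <= breakpoint:
        return signal
    return breakpoint + (signal - breakpoint) * multiplier

print(expand_highlights(0.5))   # 0.5 - midtones and shadows untouched
print(expand_highlights(1.0))   # 1.2 - peak highlights gain HDR headroom
```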

Using FrameFormer in SelenioFlex File

SelenioFlex File’s file-to-file media processing solutions seamlessly blend transcoding and workflow capabilities, supporting a comprehensive range of formats with superior quality for applications from post production and archive to multiscreen distribution. Built on the Imagine Communications Zenium pure microservices platform — an agile software engine that enables customizable foundational architectures — SelenioFlex File delivers a dynamic system management environment, allowing ready access to a catalog of features, functionalities, and licenses that are required at run time. This decision-based media processing workflow facilitates intelligent automation and exceptional scalability, and is easily managed from a single, intuitive, and consistent interface.

The FrameFormer software standards converter component for SelenioFlex File is a plug-in that enables content conversion from any frame rate and format to any other frame rate and format. Typical conversions include UHD-to-HD downconversion and HD-to-UHD upconversion, as discussed in the applications above. Figure 14 shows the FrameFormer component in a typical workflow.

Figure 14: FrameFormer within a Zenium workflow

FrameFormer is available as a plug-in for popular editing software such as Final Cut Pro X or Adobe Premiere Pro, on platforms such as Imagine Communications’ SelenioFlex File, or as a stand-alone application that can also be customised for integration into a bespoke workflow. ø www.insync.tv

References

[1] Report BT.2408-1, “Operational practices in HDR television production,” April 2018, ITU.

[2] Recommendation BT.2100-1, “Image parameter values for high dynamic range television for use in production and international programme exchange,” June 2017, ITU.

[3] BBC R&D Projects website: https://www.bbc.co.uk/rd/projects/high-dynamic-range

[4] Thomson, S., “Conversion of conventional video for display on High Dynamic Range televisions,” SMPTE Motion Imaging Journal, April 2017, pp. 23–28.



Working with “Hollywood’s Big Brain”

DAVE BASTA

Senior Colourist, Lumeni Productions

ø www.lumeni.com

o David Basta has spent 20+ years at Lumeni serving major motion picture studios and TV networks. Most recently, Dave’s passion led to the development of Lumeni’s flagship service, Dynamic Interactive Review.

Can you give an introduction to Lumeni?

Lumeni began as a film-based graphics production company in 1979. At that time it specialised in creating high-end graphics and effects using computerised motion-control animation cameras along with a process known as ‘back-lit, multi-pass, motion-control photography’. I came onboard in 1995 when Lumeni’s existing camera graphics customers started requesting digital title finishing, primarily using Adobe After Effects. The company was a pioneer in developing workflows and colour science to accurately record digital cinema titles to film.

As titles became more layered and interactive with theatrical background plates, it became clear to our team that we needed a powerhouse DI system with serious compositing tools. Because of the nature of our work, we are often called to sit in on DI sessions at almost all the premiere DI facilities in Hollywood. Once we saw the Baselight FOUR and EIGHT workstations at the former Post Logic, we knew this system was going to be the future for our work at Lumeni.

How did you start your career?

I graduated from RIT with a BS in Professional Photography. After a few false starts, I ended up working for Carabiner in New York, which at that time was one of the world’s biggest producers of large-scale corporate meetings and events. Carabiner was an analogue photography heaven that created slideshow presentations using up to fifteen synchronised projectors, and really pushed the envelope in terms of what could be done in front of large audiences with analogue photo technology. Dozens of artists worked 24-hour shifts creating Ektachrome title slides and business graphics for their shows.

The next logical step for me was to make the jump to Hollywood theatrical projects, and moving to Lumeni in 1995 allowed me to do just that.

How do you keep up in our evolving film industry and keep current with emerging technologies, workflows, etc.?

Because most of Lumeni’s work involves small sections of very large theatrical projects, my job gives me a bird’s-eye view of the workflows that major tentpole movies around town are using. We have found that it makes the most sense to provide graphics and digital opticals in the same file format and colour space as the feature film and its associated VFX shots. For every major project, we receive the same workflow documents that the VFX vendors receive. I have learned not to be shy about asking the feature film’s colour scientists and VFX supervisors to help explain new concepts.

Can you explain the finishing process of 3D trailers?

Recently studios have been asking that the 2D and 3D deliverables have no major creative differences. For the last few big projects, we have changed our longtime strategy of building 2D first. We now build 3D first and work out all of the depth problems and 3D-related art issues. Then we clean up and centre the left eye for our 2D delivery.

Since trailers are assembled from small sections taken from the feature film, it’s important that we know and understand the 3D depth budgets of the larger project. It is vital that we talk with the stereo supervisor of the show to get a sense of their tastes. Elements that don’t cut elegantly into the larger theatrical project always become re-dos.


All images: Lumeni Productions



What’s different about 3D stereo work and what are the challenges?

One of the challenges working in 3D stereo is the amount of content that must be generated, which is at least double what is needed for its 2D counterpart. Several of our recent projects specified 4K stereo deliverables. Generating both left- and right-eye elements at that high resolution requires a lot of After Effects or NUKE rendering, and a significant amount of layer pre-caching in Baselight as well.

How does a grading session typically work?

As a colourist at Lumeni, I work in concert with our in-house producer, Rob Ball. The legal requirements and client co-ordination issues on trailer work are enormous. Often our ‘plan A’ for a project hits a dead end and we need to come up with a ‘plan B’ or ‘plan C’ on the fly. Since a studio-wide trailer campaign often includes half a dozen or more studio decision makers, changes must go through multiple reviews and approvals to keep everyone in the loop. Rob’s 30-plus years of producing experience enable him to defuse issues before they become problems.

Rob and I work as a team to come up with a colour grading and compositing solution that we not only think solves the client’s needs, but that we are also proud of personally. This usually gives us a really healthy starting point to address client notes and make everybody happy.

How do you and Lumeni’s VFX team work together?

For VFX- and stereo-heavy projects at Lumeni, I always work closely with our Compositing Supervisor, Sean Amlaner. In addition to his 17 years of compositing experience, he is also a fully fledged stereographer. At the very beginning of every project, we extensively discuss project strategies, layering, colour science, etc. All of our art staff computers are tightly networked to our two Baselight theatres. Artwork continuously flows in both directions. NUKE and After Effects test plates and heroes move to Baselight, and sections of the Baselight timeline that could be more effectively handled with NUKE’s toolset are moved to the art department. Hand-offs usually take just a few minutes.

How would you define your personal style of grading?

Conscious.

How do the compositing tools help you meet tight deadlines?

The compositing tools in Baselight are at the heart of our approach to digital finishing, even when emergency ‘surgery’ is required for projects with tight schedules and limited budgets. Baselight’s power and flexibility enable our art department to interface seamlessly with our DI theatres to produce efficient, cost-effective results. This dynamic workflow means that our clients can make interactive changes to graphics, effects, and colour while reviewing their projects in our theatres. We call this service ‘Dynamic Interactive Review’. I also like to describe it as ‘Fire and Rescue’ for graphics that must be fixed now. These kinds of projects really demonstrate our strengths at Lumeni. With very few exceptions, we have every programme and plug-in used for motion graphics installed and ready to go at a moment’s notice, and we have a full-time art staff in-house that knows how to use them.

A few days ago, we were commissioned to use our Dynamic Interactive Review workflow to improve a studio intro logo that was supposed to look like jewellery-quality gold but didn’t, greatly disappointing the studio clients, who had only three hours left to complete the project. Two hours after we received the original After Effects project, we delivered a hero take to the editor in NYC, and the studio clients loved it. Our art department output the original After Effects project layer by layer in a high-bit-depth colour space and handed the layers off to Baselight for re-compositing. Baselight’s advanced colour science tools enabled us to rescue highlight information that was burned to a crisp in the original take. Using Baselight’s delicate colour curves, we nudged a very presentable, even elegant, gold texture out of the layers. Best of all, the final price was very reasonable, and the client was delighted.

What do your clients like most about what you can offer them with the system?

Their favourite thing is something they don’t even know the name of: Truelight Colour Spaces. Our clients are amazed that, by using the built-in colour space tools in Baselight, we can exactly match the colour workflow of almost every film production in town. When our work is dropped into the bigger project at Technicolor or EFILM, the colours are exactly what everybody expected.

How would you summarise your relationship with FilmLight?

I think of FilmLight Hollywood as the ‘big brain’. Without their input it would be impossible to keep up with all of the technical innovations that are continually happening in the industry. I would say that 99% of the time, when I ask the team at the Hollywood office about a new technical workflow, they have already encountered the issue in the weeks before and are well versed in explaining it to non-engineers.

What is your inspiration?

I’m a simple guy. I like to make beautiful things on large screens, and to bring home the bacon to support my family.

What’s next for Lumeni?

I’ll let our CEO, Gilbert Yablon, answer that one…

GILBERT: “When we started Lumeni in 1979, our goal was to make clients feel like heroes, and we’ve never lost sight of that. Every project is important to us, and we strive to achieve outcomes we can all be proud of. In today’s post-production environment, the demand for quality has never been higher, yet the difficulty of achieving quality is constantly increasing. We believe tools like Baselight, our experienced staff and innovations like Dynamic Interactive Review help our clients sleep at night, and position us perfectly for the future challenges filmmakers are likely to encounter.” ø

Based on source material provided by: www.filmlight.ltd.uk


AGILE TECHNICAL SPECIFICATIONS FOSTER INNOVATION BY COMPANIES LARGE AND SMALL

BRUCE DEVLIN

Across the media and entertainment technology industry, there is a perception that SMPTE Standards are heavyweight and take a long time to create. The fact is that the standards-creation process within SMPTE is quite efficient and lightweight — until it gets to the point of requiring 60 smart people to come together and agree on the minutiae of a standards document. That can take some time.

o The article describes the considerations and working steps involved in developing and implementing new SMPTE Standards.

Back when most of the industry’s media processes were implemented with hardware-based systems, the tried-and-tested approach of specifying in detail, upfront, prior to publishing the standard was appropriate. Before manufacturers committed to getting a chip fabricated, they had to be confident that every little detail of the standard on which it was based was just right. Today, we live in a different world. Organisations build systems and products using software and agile processes, with continuous incremental releases of new code. Within this dynamic and much faster-evolving environment, the conventional SMPTE standards process is unsurprisingly considered by some to be a poor fit or bloated.

Understanding this, the Society took a look to see how it might accelerate and improve the process of providing useful tools to the industry’s technology suppliers.

Creating the SMPTE Technical Specification

After studying the industry and its needs, SMPTE decided to develop a prototype process for the creation of business-driven SMPTE Technical Specifications (TSPs). In doing so, we borrowed the first half of the standards-development process, right up until the big committee ballot that calls for consensus among, potentially, hundreds of voting committee members.

Rather than put the Committee Draft (CD) document to an internal ballot, as the conventional standards process dictates, the Technology Committee (TC) reaches consensus that the document can be called a TSP and uses the GitHub platform, which serves the world’s largest community of developers, to put it out for public comment. In effect, we move the knowledge, or at least links to it, to the place developers naturally search for that sort of information. At this point, there is time for field tests, some proofs of concept, or even generation of some reference code — all of those good things that you do in a modern, iterative software process to iron out the wrinkles and eliminate any bugs. Following this feedback period, the Drafting Group addresses, and optionally resolves, the comments from the public. They can then revise the TSP as often as they see fit.

If, at any point, the proponents behind the TSP believe, thanks to feedback from users and the development community, that there is merit in turning the TSP into a SMPTE Standard suitable for a wider audience, they just push the appropriate document(s) into the second half of the standards process to create a very strong and stable standard that will last for a long time.

Enabling Greater and More Diverse Innovation

With TSPs, SMPTE is beginning a new chapter in its work to encourage global interoperability and to foster the emergence of new and stable technologies. Our hope in getting documents out for public search and scrutiny sooner is that the Society and its committees can benefit from earlier access to a greater amount of user testing and feedback. At the same time, we intend that this new process provide technology suppliers of all sizes with convenient access to the tools they need to take advantage of and build on SMPTE Standards.

TSPs complement SMPTE Standards by being responsive to and reflective of rapidly evolving industry technologies and workflows. The first such specification was published earlier this year. SMPTE TSP 2121:2018, titled IMF Application DPP (ProRes), was proposed by the Digital Production Partnership (DPP), and it builds on the existing Interoperable Master Format (IMF) standard developed and published by SMPTE, which was initially designed for streamlining the distribution of premium feature film content.




The published TSP represents a significant milestone in industry collaboration: the DPP worked with the European Broadcasting Union (EBU), the North American Broadcasters Association (NABA), and the IMF User Group to create the specification. The process we’ve developed for specification creation has drawn the interest of technology suppliers large and small, as well as trade organisations and even individuals. The openness and relative agility of the process is appealing to a wide swath of companies, and the potential benefits outweigh the low cost of proposing and pushing a specification through to completion. The price tag includes guest membership for specification proponents, and agreements between SMPTE and several trade associations also offer a simple and economical pathway for proposing a specification.

Looking Ahead to Future Opportunity

SMPTE Standards themselves, and now TSPs, are merely tools for solving business problems in our industry. What we at SMPTE want to foster with our new specification process is a healthy community of people who are implementing to those standards and building the tools and services that media companies will use to create and distribute more and better content in better ways.

Thanks to our new TSP process, teams of software developers, no matter where they work, now have the ability to search and access the knowledge they need to create a brilliant early-stage implementation. Using a familiar developer platform, they can find and take advantage of reference tools such as test vectors, sample code, XML schemas, metadata, and other helper information essential to making products and services based on SMPTE TSPs.

The opportunity opened up by TSPs is remarkable, given that in our marketplace we’re seeing new companies developing significant volumes of new content and working with different distribution and broadcast models. Some of these business models are flourishing at the moment, but nobody knows what’s coming next. TSPs give technology suppliers the information and tools they need to respond quickly to industry shifts.

TSPs are just one part of SMPTE’s broader commitment to tackling modern challenges with a modern approach. We welcome broad participation in this work, and we also appreciate comments, both kind and cruel, on our efforts. I am particularly interested in how users would like to see SMPTE Standards and TSP material made more searchable. ø

BRUCE DEVLIN

SMPTE Standards Vice President

ø www.smpte.org

Image: SMPTE


A Good MAM Strategy Makes Your (Production) Life Easier

Current advancements in media asset management (MAM) are focused on repurposing archives and making the most efficient use of the assets stored in a centralized repository, either locally or in the cloud. Both options provide easy and secure access to media files, and both can be combined into a hybrid solution that works for production teams locally and across the world.

However, success lies not with fancy networked servers or specialized software—though that certainly helps—but in metadata and how it is used. Properly indexed, footage can be found, navigated through and reused more easily. It is still the same content, but it is far more valuable.

The importance of metadata is even more significant now that video consumption is increasingly interactive and takes place simultaneously on TV, various social networks, online services and mobile applications.

Yet creating proper metadata is not that easy. It requires a lot of manpower, and it is really expensive. Once a good metadata strategy is in place, the key is to develop house rules or best-practice “profiles” that are consistent across all audio, video and data assets. This allows the user to search and find assets quickly and reliably. Without such rules, production can be adversely affected, and then productivity suffers.

The most successful media repurposing strategies clearly define a variety of retirement, repurposing and retention profiles using combinations of Scheduling Rules and Scheduled Actions. Here’s a list of several media repurposing policies that have been successfully deployed and have enabled unique, collaborative workflows, where assets are available to all authorized team members with the click of a mouse (or a spoken word).

• Search and Indexing Policies – Keep your streaming media discoverable while adhering to IT and legal compliance requirements. Use flexible metadata capabilities to tailor governance policies dictating who can search for your media, and how your media is to be available across systems and CDNs, inside and outside the firewall.

• Media “Pruning” Policies – Combined with media transcoding, which generates cross-device and cross-screen renditions (“flavors”) of your media, schedule selective deletion of flavors and/or generation of smaller ones to optimize storage, while still keeping your media online and available to a more limited audience. Pre-determined scheduling rules can be established to trigger actions based on key phrases like “media older than X” and “media not viewed since X.”

• Media “Time Capsule” Policies – Retire stale content non-destructively. Send off media and metadata to cheaper offline storage or a long-term archive, keeping only a metadata signature. Many systems on the market allow on-demand revival by re-ingesting and gluing the media back to the central storage repository. Interoperability with third-party compliance or archiving systems can be supported via in-transfer transformations of media and metadata. This allows the user to meet proprietary formats and government standards. Often triggered by scheduling rules such as “media older than X” and “media not viewed since X.”

• Global Purge Policies – Retire stale content destructively. Schedule deletion of all video files, thumbnails, metadata and related assets of a streaming media object. This can be made to support a grace period by sending email alerts to media owners ahead of deletion. Often triggered by scheduling rules such as “media older than X” and “media not viewed since X,” combined with custom taxonomy rules such as “Content Type is Y.”

• Retention Policies – Maintain compliance with legal requirements by setting blocking rules such as “Marked for Legal Hold,” which ensures retention of media despite it reaching its planned expiration.

• Scheduled Metadata Transformations – Save on streaming costs while keeping your assets online by scheduling alterations to entitlement rules. This allows the user either to effectively hide assets from all users but your media repository admins, or to let media customers perform “stock rotation” by reintroducing older content to new users.

• Combined Policies – All of the above can be combined and chained. For example, users can “prune” retiring assets by generating a small-footprint flavor, send only this flavor to a “time capsule” and “global purge” all of the streaming media object’s remaining assets. (A minimal sketch of how such rules chain appears below.)
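To make the rule chaining concrete, here is a hypothetical Python sketch. The class and helper names are invented for illustration and do not correspond to any particular MAM product’s policy syntax.

```python
from datetime import datetime, timedelta

class Asset:
    """Hypothetical stand-in for a streaming media object and its metadata."""
    def __init__(self, name, created, last_viewed, content_type):
        self.name = name
        self.created = created
        self.last_viewed = last_viewed
        self.content_type = content_type

# Scheduling rules expressed as predicates over an asset.
def older_than(days):
    return lambda a: datetime.now() - a.created > timedelta(days=days)

def not_viewed_since(days):
    return lambda a: datetime.now() - a.last_viewed > timedelta(days=days)

def content_type_is(kind):
    return lambda a: a.content_type == kind

def all_of(*rules):
    """Chain rules: every one must match before the scheduled action fires."""
    return lambda a: all(rule(a) for rule in rules)

# Combined policy: purge promos older than two years, unviewed for a year.
purge_policy = all_of(older_than(730), not_viewed_since(365),
                      content_type_is("promo"))

clip = Asset("gala_1987", datetime(2015, 3, 1), datetime(2016, 5, 1), "promo")
if purge_policy(clip):
    print(f"Schedule purge (after grace-period email) for {clip.name}")
```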

Artificial intelligence-powered algorithms that support speech-to-text search and other ways of searching large databases for specific content can now significantly improve the implementation and use of many of these activities. AI is also helping to increase productivity by performing visual searches for recurring themes (people, places and things).

A good asset management strategy takes careful planning and input (feedback) from all of the various departments and individuals involved in content creation and delivery. When everyone’s on board, the team works as one strong entity, and assets lying dormant for years can become retrospective programs that generate new revenue. If not, finding that important clip from 1987 might be the hardest thing you do all day, or week, or month... o


MICHAEL GROTTICELLI

is an experienced editor and regular contributor to FKT’s Tech Across America column.

Image: M. Grotticelli


M6 Group Transitions to an All-IP Workflow

n M6 GROUP, operator of multiple TV channels in France, will upgrade its ingest, playout and storage infrastructure to a media-over-IP solution from Harmonic. Offering support for the SMPTE ST 2110 suite of standards for professional media delivery over IP networks and SMPTE ST 2022-7 for seamless protection switching, Harmonic’s playout and storage solutions will streamline the broadcaster’s transition to all-IP video delivery.

M6 Group chose Harmonic’s Spectrum X media servers and MediaGrid shared storage system based on their rock-solid reliability and workflow simplicity. The broadcaster will use the Spectrum X media servers for content ingest, live switching with graphics, DVE, 24x7 playout and manual review, with manual control supported through Broadteam’s Omnium video controller. Harmonic’s MediaGrid shared storage system will support M6 Group’s real-time editing and playout operations. ø www.harmonicinc.com


Dazn Opens Development Centre in Amsterdam

n SPORT STREAMING service Dazn will be opening a brand-new office in Amsterdam. According to the company, the new office is expected to create jobs for 300 people by 2022, with roles including software engineers, development managers, and scrum masters.

The new development centre will play a crucial role in the global roll-out of Dazn as the home of the R&D and Innovation, Acquisitions and Retention, and Third Party Integration teams.

Headquartered in London and part of Perform Group, the company will join a growing cluster of tech companies already based in the city, such as Netflix, Uber, Amazon, Viacom and Booking.com.

Dazn is a global live and on-demand sport streaming service that allows fans to watch their sport, their way. Dazn is already available to sports fans in Germany, Austria, Switzerland, Japan, Canada, Italy and the U.S. The investment in a new development centre will enable the company to deliver on its plans to be live in 20 markets by 2022.

The company was formed in 2016 and, by its own statement, has a global workforce of over 1,200. In the past year alone, Dazn says it has made 750 new hires in the UK. ø www.dazn.com

Image: Dazn

Image: Harmonic




New Artificial Intelligence Tools Transforming Media Asset Management

DAVE CLACK

is CEO of Square Box Systems.

ø www.squarebox.com

The marriage of advanced media asset management with emerging AI platforms for media analysis offers powerful potential for transforming media workflows. In this article, we’ll describe how the integration of these two technologies can make it easier for operations to access, manage, and archive content.

In today’s fast-paced media environments, more new content is being created than production teams can possibly manage without specialized tools. At the same time, the clock is ticking for digitizing historical content that exists in legacy analog formats like tape before the original content degrades. It’s critical that all of these assets be logged and tagged so that they can be found easily, but teams have no time to do this essential work.

At the same time, the current generation of media asset management tools has grown up in an environment where it was starved of metadata. As a result, content teams’ options are limited to pulling technical metadata from media files or streams, extracting meaning from file and folder names, or manual logging.

Artificial intelligence (AI) is beginning to change how media organizations meet these challenges. A new and emerging breed of AI platforms for media analysis, when paired with leading-edge media asset management (MAM) tools, offers great potential for transforming media workflows and making it easier than ever for operations to access, manage, and archive tremendous volumes of content. Through powerful tools such as speech-to-text and automatic language translation, AI engines bring new power to the MAM task of logging and tagging content – with the ability to tag assets automatically based on attributes such as people, places, things, and even sentiment.



Images: Square Box Systems




It sounds almost too good to be true: suddenly you can unlock the potential of all of your content and make it immediately searchable, reusable, and monetisable. At last, you can get some traction on those digitization projects and get a better handle on all of the content in your existing library! But wait: while the potential exists to realize these benefits someday, the truth is that the technology needs to overcome some issues in order to become mainstream.

One area that needs improvement is accuracy. While AI analysis is getting better all the time, particularly with speech-to-text offerings from players such as Google, Microsoft, Amazon, and IBM, fine-tuning is still needed. For instance, the engine might not be able to distinguish between U.K. and American English, and abbreviations and jargon are likely to generate mistakes. The industry is still working on easy methods to train the AI engine to recognize these language variations and correct mistakes. Also, for image or video analysis, the sophistication of AI tools varies considerably. Some platforms offer only very basic video analysis, meaning the best way to capture metadata for people, places, objects, and sentiments is to make a set of image sequences and analyze them manually.

AI aggregators can help users avoid some of the costs and complexities of setup by making it easier to choose the right AI engine for a specific task. But even so, picking the AI tool that’s best for a given activity is not trivial. At the same time, cost structures across the industry are far from transparent, making it difficult to work out the total expense of applying AI to a media library. It’s a multi-step process: first, you have to work out how to get your content into the AI engine, which is often in the cloud. That might involve having to create a video proxy, separate the audio files, create an image sequence, and other steps, and then uploading the content and managing its lifecycle. Should you leave the content on the vendor platform or delete it to save on storage? Is it in the right format for the AI engines to understand? Which AI tool should you run, and is there a separate cost for each style of analysis? There might be different price tiers for different content formats; for instance, 4K assets might cost more. With each vendor having its own price list, it’s pretty difficult to compare apples to apples.

Also, the technology is advancing so quickly that any AI analysis done today may have to be refreshed later, as the tools improve. Managing these refreshed data sets, especially if they have been corrected or updated by a human after the original analysis, adds another layer of complexity. And of course security is a concern, especially if the data is uploaded to cloud providers.

The AI-MAM Connection

As these powerful AI technologies continue to mature, strong media asset management capabilities will become increasingly important. On the metadata side, tools that can store, search, and easily correct a huge volume of time-based metadata are crucial. Good metadata and user interface design are vital to keep the system from overloading users with too much information. And on the workflow and automation side, feeding the AI engines with the right data and automating the analysis, while keeping down costs, will separate the true enterprise offerings from the also-rans.

So what might an AI-powered MAM solution look like? One approach is to supercharge the MAM system’s logging, tagging, and search functions through integrations with leading AI vendors and aggregators, such as Google, Microsoft, Amazon, and IBM. Integrations with best-of-breed AI platforms and cognitive engines could allow the MAM to leverage advanced AI-based speech recognition and video/image analysis, with the flexibility to be deployed either in the cloud or in hybrid on-premises/cloud environments. Here are a few of the advanced capabilities that could result:

• Speech-to-text, to automatically create transcripts and time-based metadata (sketched below)

• Language translation

• Place analysis, including identification of buildings and locations without using GPS-tagged shots

• Object and scene detection (e.g. daytime shots or shots of specific animals)

• Sentiment analysis, for finding and retrieving all content that expresses a certain emotion or sentiment (e.g. “find me celebrations (in a sports event)”)

• Logo detection, to identify when certain brands appear in shots

• Text recognition, to enable text to be extracted from characters in video

• People recognition, for identifying people, including executives and celebrities
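As a sketch of what such time-based metadata looks like once a speech-to-text engine has run, consider the fragment below; the segment structure and field names are hypothetical stand-ins, not any vendor’s actual output format.

```python
# Hypothetical shape of speech-to-text output attached to a MAM asset:
# each segment carries its own timecodes, not just a flat transcript.
transcript_segments = [
    {"start": 12.4, "end": 15.1, "text": "welcome back to the studio"},
    {"start": 15.1, "end": 18.9, "text": "our top story this evening"},
]

def search_transcript(segments, phrase):
    """Return the timecodes at which a phrase is spoken, enabling
    jump-to-moment search rather than whole-asset search."""
    return [(s["start"], s["end"]) for s in segments
            if phrase.lower() in s["text"].lower()]

print(search_transcript(transcript_segments, "top story"))  # [(15.1, 18.9)]
```

The design point is that the transcript becomes time-based metadata: a search result is a timecode the editor can jump to, not merely a list of assets that contain the phrase somewhere.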

The next frontier

Of course, these capabilities are just the start. The MAM system can also be a powerful tool to train and improve AI engines; e.g. content manually tagged in the MAM could perhaps be used to identify the executives in a corporation. The MAM could use this manual tagging to train AI engines to do a better job of logging and tagging new content.

The industry is being transformed by AI and the explosion in sometimes low-quality metadata. Only the most powerful, flexible, easy-to-integrate, secure, and scalable MAM platforms are embracing this challenge, and only they will thrive.

In the right hands, AI becomes the key that unlocks the next generation of MAM technologies. ø



How AI Can Transform Asset Management

DAVID CANDLER

Senior Director, Customer Solutions, Wazee Digital

ø www.wazeedigital.com

MIKE MORPER

Vice President, Product Marketing, Veritone

ø www.veritone.com

In virtually every industry vertical, artificial intelligence (AI) is claiming a growing stake in the supply chain and creating both operational enhancements and business efficiencies at an amazing rate. In the media and entertainment (M&E) industry, AI is starting to make its way into a wide range of conversations about everything from projects to products. With an initial focus on streamlining workflows and creating enhanced discovery experiences, the benefits are rapidly becoming a reality for many businesses. This article describes the role of AI in a SaaS asset management environment.

The media content landscape continues to transform at a staggering rate. M&E organizations face increasing challenges to grow audiences, prove the effectiveness of advertising campaigns, index for quality and compliance, and increase revenue. In terms of asset management, the sheer volume of content under management and the rate at which it is being created can make finding and retrieving content a challenge, even with the best metadata and logging workflows in place. With the explosion of content creation, from user-generated content to multi-version studio releases, AI will have an increasing role to play in the critical task of discovering relevant content all along the digital supply chain, so that it can then be further monetized.

That’s why M&E vendors are engaging with the likes of IBM Watson, Amazon Web Services, Microsoft Azure, Google and Veritone to find tangible ways to use AI to help drive efficiencies and monetization opportunities across their customers’ operations and content archives. In the M&E sector, early adopters of such technologies are already beginning to reap the benefits of operational efficiencies and revenue generation.


Images: D. Candler, M. Morper





At its most basic, AI alongside asset management can help organizations analyze, share, and index their media offerings automatically, which ultimately leads to streamlined workflows and enhanced discovery experiences. AI can automatically generate preconfigured and relevant metadata that can enhance advanced searches on vast archives, which in turn can reduce operational costs and raise the discoverability and usability of valuable media content.

Putting AI into Action in a SaaS Asset Management Environment
Wazee Digital and artificial intelligence partner Veritone have joined forces to give customers a wide range of advanced cognitive engines and applications that seamlessly integrate with customers’ digital media archives. Veritone’s AI technology for automated metadata extraction and analysis is now firmly embedded within Wazee Digital’s cloud-native Core enterprise SaaS asset management platform, aimed at further enhancing the management, discovery, and monetization of valuable content. Core has about 25 million video assets under secure management for diverse organizations such as major studios, governmental departments and agencies, broadcasters, sports federations, news archives, and a wide range of other content owners and rights holders.

Veritone’s aiWARE operating system is unique in that it uses Veritone’s proprietary AI technology to orchestrate the capabilities of the world’s leading AI solution providers in one trusted operating system. Through a direct integration with Core, aiWARE delivers a means for M&E organizations to leverage AI quickly to solve real-world business challenges with ease, speed, and accuracy. Veritone’s extensible AI ecosystem of cognitive engines and powerful applications makes it possible for Wazee Digital Core users to enhance their search and exploit every frame of video and every second of audio for objects, faces, brands, text, sentiments, keywords, and more. Users can discover unique insights, dissect and analyze content programmatically and by multivariate search, and monitor media in near-real time. The technical collaboration enables the correlation and transformation of both structured and unstructured data in a seamless manner via AI, at scale.

What does a joint solution with aiWARE mean for broadcasters, production companies, networks, sports organizations, ad agencies, brands, and anyone else who uses Core? It means Core customers have the power to enrich their owned and earned content easily and intelligently in ways not possible until now.

Here’s how the combined Wazee Digital Core and Veritone aiWARE solution can help:

Transcription — Transcription engines convert spoken words in both audio and video recordings into readable text. They are built and trained to recognize different languages, dialects, and topics. Word-for-word transcription has evolved into natural language processing, adding contextual relevance to increase accuracy (which is expected to reach near-perfect levels in the next few years). The Core-aiWARE solution has the potential to generate transcription from audio in up to 55 languages.

Face Recognition — Face recognition engines can identify and index the presence of individuals in video or still images and can recognize specific individuals based on a library of known faces. While face recognition has been used in security and law enforcement, it is also seeing adoption in hospitality and, in particular, the media and entertainment industry. The Core-aiWARE solution currently employs up to eight cognitive engines to identify the faces of over 10,000 public figures. Of perhaps greater value, custom libraries of faces can easily be created too, making the automatic identification of specific athletes, actors, or any other category of individuals quickly achievable.
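As an illustration of how a custom face library might be queried, here is a minimal sketch that matches a detected face against stored embeddings by cosine similarity. The embedding model, library layout, and threshold are assumptions for illustration, not the aiWARE engines themselves:

# Minimal sketch: match a face embedding against a custom library.
import numpy as np

def best_match(face_vec, library, threshold=0.6):
    """Return the library name whose embedding is closest (cosine), or None."""
    names = list(library)
    mat = np.stack([library[n] for n in names])           # (N, D) embeddings
    mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)
    v = face_vec / np.linalg.norm(face_vec)
    sims = mat @ v                                        # cosine similarities
    i = int(np.argmax(sims))
    return names[i] if sims[i] >= threshold else None

# library = {"Athlete A": vec_a, "Actor B": vec_b}   # custom face library
# best_match(embed(frame_crop), library) -> "Athlete A" or None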

Object Recognition — Object recognition engines can identify multiple objects within video or still images. The rapid proliferation of specialized libraries continues, with datasets that include weapons, tree or insect species, and more. Labelled images available for machine learning globally will soon be measured in hundreds of billions. The Core-aiWARE solution has the potential to use more than 25 cognitive engines, with more than 1 million labelled images included in the aiWARE platform.

Sentiment — Sentiment engines discern the tone behind a series of words and use it to understand the attitudes, opinions, and emotions expressed. Sentiment can be scored based on “polarity” (positive, neutral, negative) or even more complex emotional categorizations and applied to text or audio. Two such cognitive engines are available in the Core-aiWARE solution.
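To show the shape of the output such engines produce, here is a toy polarity scorer. Real sentiment engines use trained models; only the idea of a polarity label plus a score is carried over:

# Toy lexicon-based polarity scoring, for illustration only.
POSITIVE = {"great", "win", "love", "excellent"}
NEGATIVE = {"bad", "lose", "hate", "terrible"}

def polarity(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"label": label, "score": score}

# polarity("A great win for the team") -> {"label": "positive", "score": 2}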

Audio Fingerprinting — An audio fingerprinting engine generates a condensed digital summary, deterministically generated as a reference clip, that can be used to quickly locate similar items across multiple media files. Audio fingerprinting can help ad agencies and brands validate the delivery and efficacy of ads that appear natively in the program content versus pre-produced ones that run during normal commercial spots. There is one engine today within the Core-aiWARE solution that supports this particular application.
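The following much-simplified sketch illustrates the underlying idea: condense audio into a compact, deterministic summary, then search for that summary inside longer recordings. Production fingerprinting is far more robust than this:

# Simplified fingerprinting: dominant spectral bin per window.
import numpy as np

def fingerprint(samples, window=2048):
    bands = []
    for i in range(0, len(samples) - window, window):
        spectrum = np.abs(np.fft.rfft(samples[i:i + window]))
        bands.append(int(np.argmax(spectrum)))    # dominant frequency bin
    return bands

def find_clip(clip_fp, program_fp):
    """Return the window offset where the ad's fingerprint occurs, or -1."""
    n = len(clip_fp)
    for off in range(len(program_fp) - n + 1):
        if program_fp[off:off + n] == clip_fp:
            return off
    return -1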

Translation — Translation engines translate written text from one language to another. Translation engines often use sophisticated algorithms to increase the accuracy of sentence structure and parts of speech, which can differ widely among languages. Properly identifying named entities, separate from normal text, is another complicating factor. The Core-aiWARE solution has the potential to use more than 10 such engines.

Geolocation — Location engines associate media with geolocation data points and enable search by location, displaying a map view of media file collections or other specialized functionality. Geolocation extends to landmark recognition, utilizing the elements found in a visual “scene” to infer the likely location, with accuracy judged on the difference between the prediction and the actual location. There is one such location engine in the Core-aiWARE ecosystem.
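That accuracy measure reduces to a great-circle distance between the predicted and actual coordinates, as in this minimal sketch:

# Haversine distance between predicted and actual locations, in km.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))   # Earth radius approx. 6371 km

# error_km = haversine_km(pred_lat, pred_lon, true_lat, true_lon)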

OCR — Optical character recognition, also known as text recognition, extracts text from an image, video, or an electronically stored information document. This could include license plates and text imagery on TV (such as a stock ticker or “Breaking News” flashed across the screen). The Core-aiWARE solution can use more than four engines in 55 languages.
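For a sense of the per-frame mechanics, here is a sketch that samples video frames and runs local OCR over them. It assumes OpenCV and pytesseract (with a Tesseract installation) are available; the Core-aiWARE engines are cloud-hosted, but the flow is comparable:

# Sample frames every few seconds and extract any on-screen text.
import cv2
import pytesseract

def ocr_frames(video_path, every_n_seconds=5):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back if FPS is unknown
    step = max(1, int(fps * every_n_seconds))
    results, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            text = pytesseract.image_to_string(frame).strip()
            if text:
                results.append((idx / fps, text))   # (seconds, extracted text)
        idx += 1
    cap.release()
    return results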

Logo Recognition — Logo recognition automatically identifies specific companies based on their logos or brands in images and video. There are four potential logo recognition engines in the Core-aiWARE solution.

All these engines allow users to build purpose-driven solutions that can help improve workflow efficiencies, optimize ad and sponsorship verification, repurpose content, enhance competitive research, unlock hidden revenue streams, and more. By applying key components, users can automatically create a searchable set of data along the content timeline, as opposed to manual viewing and logging. And because one API integration between Core and aiWARE opens up access to hundreds of cognitive engines, Core users can try different engines to find the one that best fits the parameters of a given project.
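A minimal sketch of that try-and-compare pattern follows; run_engine and the scoring function are stand-ins for whatever integration and ground truth a given project has:

# Run the same asset through candidate engines and keep the best scorer.
def pick_best_engine(run_engine, engine_ids, asset, truth, score):
    """run_engine(engine_id, asset) -> metadata; score is higher-is-better."""
    results = {eid: score(run_engine(eid, asset), truth) for eid in engine_ids}
    best = max(results, key=results.get)
    return best, results

# best, scores = pick_best_engine(run, ["engine-a", "engine-b"], sample_clip,
#                                 reference_metadata, score=accuracy_fn)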

How Does It Work?
On the front-end UI, Core users can simply select single or multiple assets, place them into a project bin, and assign the bin to a preconfigured Veritone workflow, which processes predefined AI cognitive engines against the assets. (Fully automated workflows are also available, in which flagged assets are submitted to the same Veritone workflow on ingest.) All metadata returned from the AI cognitive engines is displayed along an asset’s video timeline, with different engine results displayed in their own timeline fields. Metadata is also indexed in the Core search engine and can be discovered either by using Core’s advanced search features or by its timeline data search tools, which jump to moments identified by the engines when selected. In turn, all “in” and “out” timecode points can be added to a project bin and shared with other users. Those segments are also automatically turned into individual sub-clips for further utilization. XML exports of any timeline are available as standard.
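The timeline behavior can be pictured as a simple query over timecoded hits, as in this sketch (the data layout is an assumption for illustration, not Core’s internal schema):

# Timecoded engine results, searched to produce sub-clip boundaries.
from dataclasses import dataclass

@dataclass
class TimelineHit:
    engine: str      # e.g. "face-recognition"
    label: str       # e.g. a recognized name or transcript keyword
    tc_in: float     # seconds
    tc_out: float

def search_timeline(hits, query):
    """Return (in, out) pairs for hits whose label matches the query."""
    q = query.lower()
    return [(h.tc_in, h.tc_out) for h in hits if q in h.label.lower()]

# Each returned (in, out) pair corresponds to an automatically generated
# sub-clip that could be added to a project bin or exported as XML.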

It’s simple and transparent for the user, but there’s a lot going on behind the scenes.

Wazee Digital Core is hosted in AWS. Most Core customers use either Amazon S3 or Glacier object storage, which can be provisioned independently or by Wazee Digital. Because Veritone also uses Amazon S3 to store and manage the renditions required for AI interrogation, processing the content is a straightforward endeavor. Core’s orchestration layer manages both automated and manual workflow processes, which push selected assets into predefined Veritone workflows that have been specified by a preset XML manifest programmed into Core. Core can select specific Veritone cognitive engines against which to run content and receive the metadata back into Core via a REST API. Once in Core, the metadata is indexed and fully searchable on an asset or timeline level.
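The return path can be pictured as a small webhook that indexes whatever an engine posts back. The endpoint and payload shape below are assumptions for illustration, not the documented Core REST API:

# Minimal callback receiver: index engine metadata against an asset ID.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

INDEX = {}  # asset_id -> list of engine result payloads

class MetadataCallback(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        INDEX.setdefault(body["asset_id"], []).append(body["engine_results"])
        self.send_response(204)   # acknowledge with no content
        self.end_headers()

# HTTPServer(("", 8080), MetadataCallback).serve_forever()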

On the Veritone side, once the Core user places assets into the Veritone workflow project bin, aiWARE ingests the media files and employs a curated set of cognitive engines to generate metadata from the supplied assets. The newly indexed media file and associated metadata are programmatically transferred back into Core. From there, users can harness Core’s UI to search for keywords, logos, faces, objects, and more.

An AI Solution That Gets Results
The use cases are vast and diverse. In one recent example, an international media conglomerate — home to premier global television, motion picture, gaming, and other brands — used the Core-aiWARE solution to underpin a broadcast compliance workflow. To comply with the U.S. Federal Communications Commission’s Children’s Television Act of 1990, the company is required to identify the talent used in any advertisements that run during children’s educational programs. Wazee Digital and Veritone leveraged automated facial recognition, speech-to-text, and enriched metadata within Core to identify the talent and provide data back to the company. As a result, the company can be sure that the ads do not contain talent that is also in the concurrent program.
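Stripped to its core, that compliance check is a set intersection over recognized names, as in this sketch (the inputs would come from the face-recognition and speech-to-text metadata described above):

# Flag any talent that appears in both the ad and the surrounding program.
def flag_talent_overlap(ad_talent, program_talent):
    return set(map(str.lower, ad_talent)) & set(map(str.lower, program_talent))

# if flag_talent_overlap(ad_names, program_names):
#     the ad must not run in that program slot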

Conclusion
Wazee Digital and Veritone have provided a cloud-native enterprise asset management platform that is AI-enabled. The Core-aiWARE solution can store, transform, search, and deliver digital media content at scale, with the added ability to interrogate collections using more than 200 cognitive AI engines processing over 728,000 cognitive searches every hour (with the number of engines and hours growing every day).

Collaborating with other industry experts can lead to better, more flexible solutions than most vendors could ever build on their own. It’s the best, most efficient way to give customers the problem-solvers they need. This type of collaboration will be the foundation of next-generation supply chains that reduce costs, improve efficiencies, and ultimately make content readily available for use. ø
