Pebble – Cybersecurity collaboration for protecting high-value media

Neil Maycock, CCO, Pebble

As broadcasters continue their IP transition and take advantage of the compelling opportunities that cloud systems offer, protecting high-value media must now be a fundamental component of the design, not a hastily appended feature.

Taking a stand-alone application running on a traditional broadcast infrastructure and shifting it into the cloud is asking for trouble. The chances are that the application, however effective it is at processing video and audio, was never built with advanced security in mind. Usernames and passwords are often left at their default values, and even the simplest penetration test exposes vulnerabilities that would make the average hacker salivate.

Placing a firewall at the point of the internet connection certainly provides a level of protection, but the way we now use IP-based systems has changed beyond all recognition, meaning the approach of relying on walled gardens is both out of date and exceptionally dangerous. Bring-your-own-device policies, international cyber-terrorists, and human error all conspire against protecting high-value media, and new approaches are needed to counteract these challenges.

We often think of cybersecurity as the responsibility of the broadcaster, but as hackers continue to hone their skills, security must fundamentally start with the software vendor. There is no point in having the most secure network on the planet if the code running on a broadcaster’s servers has more holes and vulnerabilities than the proverbial sieve. Modern software must be built from the ground up, with security treated as equally important as the processes and services the application provides.

Security policies need to be defined before a single line of code is committed and then robustly enforced throughout the whole software design process. Vendors that diligently apply secure coding principles as part of the design process, including verification using advanced third-party tools, can demonstrate that their code is secure even before it has left their own development lab. Continuous analysis and review should be at the heart of development methodologies so that when a new feature is shipped, security remains at the core of the design.

Modern software designs generally rely on third-party libraries for generic tasks such as HTTP message processing and secure access. Although these libraries are often highly secure and well proven, their prolific use across the wider IT community makes them a focal point for hackers looking to exploit vulnerabilities. As an example, the security issue found in OpenSSL’s implementation of the Heartbeat extension for the TLS protocol, first discovered by Google back in 2014, gave rise to the Heartbleed vulnerability. This could trick SSL/TLS servers into giving away sensitive information to the hacker, including usernames and passwords. Although it was quickly addressed and the necessary security patches issued, it still attracts hackers’ attention today, and if an SSL/TLS access point hasn’t been updated, a hacker can gain undetected access to the network.

We may think that the Heartbleed exploit is the responsibility of the IT department that installed the SSL server, and this may well be correct. But what if the software vendor providing the video and audio processing application had used the OpenSSL library in their solution? Even if the vendor’s library was the most secure version available at release, a new exploit could leave the broadcaster vulnerable to attack even if they had patched all their own instances. This is where vulnerability management comes into play, and it forms part of the EBU’s R143 Cybersecurity Recommendations. With this, responsibility for monitoring and identifying security exploits becomes a collaboration between the broadcaster and the partner vendors supplying software applications. Vendors who understand and comply with EBU R143 are more likely to quickly and proactively notify broadcasters of a pending issue.

The Common Vulnerabilities and Exposures programme (CVE.org) is an excellent source of information for quickly discovering library vulnerabilities and exploits. Broadcasters and vendors that sign up to its news emails will be made aware of vulnerabilities, allowing them to significantly reduce their exposure.
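
As a rough illustration of how this monitoring can be automated, the short Python sketch below queries the public NVD CVE API for recently published entries matching a library keyword. The endpoint, query parameters and JSON fields reflect the NVD 2.0 API as publicly documented and should be verified against current documentation before relying on them.

```python
# Minimal sketch: poll the NVD CVE API for entries mentioning a library
# (e.g. "openssl"). Endpoint and JSON fields follow the public NVD 2.0 API;
# verify against current NVD documentation before relying on this.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_cves(keyword: str, limit: int = 5) -> list[dict]:
    """Return id and summary for the first `limit` CVEs matching `keyword`."""
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    results = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        summary = next((d["value"] for d in cve.get("descriptions", [])
                        if d.get("lang") == "en"), "")
        results.append({"id": cve["id"], "summary": summary})
    return results

if __name__ == "__main__":
    for entry in recent_cves("openssl"):
        print(entry["id"], "-", entry["summary"][:80])
```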

When users access the application through the convenience of a web browser, we can no longer rely on simple password access. Although IT departments have tried to enforce security policies by forcing password changes, these have proved counterproductive: users tend simply to increment a number at the end of their password, establishing a pattern and making it more vulnerable to attackers.

Using centralized credential management combined with OAuth 2.0 provides the best of both worlds for user security. A unified login approach means users do not have to keep changing their passwords and can log in to the web browser quickly, easily and securely. Centralized management also allows system administrators to set variable logout timing. For example, in an MCR where monitoring is key to the operation, engineers do not want the web browser to log out at critical moments. Centralized credential management facilitates smart, time-limited sessions that meet operational needs while keeping media secure.
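
To make the idea of role-dependent session lifetimes concrete, here is a minimal sketch of how an application might validate an OAuth 2.0 access token and enforce a longer session for MCR monitoring roles. The claim names ("role", "iat") and the per-role limits are illustrative assumptions, not a description of any specific product or identity provider.

```python
# Sketch: role-dependent session lifetime on top of OAuth 2.0 bearer tokens.
# Claim names ("role", "iat") and the per-role limits are illustrative
# assumptions; real deployments take these from the identity provider's config.
import time
import jwt  # PyJWT

# Longer sessions for MCR monitoring positions, shorter ones elsewhere.
MAX_SESSION_SECONDS = {
    "mcr_engineer": 12 * 3600,   # keep monitoring sessions alive across a shift
    "playout_operator": 4 * 3600,
    "default": 1 * 3600,
}

def validate_session(token: str, public_key: str, audience: str) -> dict:
    """Decode the token, then enforce a role-specific session lifetime."""
    claims = jwt.decode(token, public_key, algorithms=["RS256"],
                        audience=audience)  # raises if expired or invalid
    role = claims.get("role", "default")
    limit = MAX_SESSION_SECONDS.get(role, MAX_SESSION_SECONDS["default"])
    if time.time() - claims["iat"] > limit:
        raise PermissionError(f"session exceeded {limit}s limit for role {role}")
    return claims
```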

Granular permissions for servers, storage, and network access also promote secure systems and support the concept of Zero Trust security, which assumes that any person or process accessing the system is potentially hostile and must be validated against the centralized credential management database.

These methods of delivering highly secure cloud software are just a few examples of how software design engineers and security experts combine their skills to keep high-value media secure in today’s cloud infrastructures. Cybersecurity resilience improves when vendors and broadcasters collaborate and work in partnership.

More Screens – What really matters when choosing your OTT technology partner?

Predrag Mandlbaum, CEO, More Screens

As the CEO and co-founder of More Screens, I’m deeply honored to celebrate 25 years in the streaming industry. Over the past quarter-century, I’ve had the unique privilege of witnessing the rise and fall of countless groundbreaking technologies, each arriving with the promise of transforming the way we consume content. For those who have been in the industry as long as I have, you’ll recall the early days of RealNetworks, Flash Media, Windows Media, and 3GPP formats, along with the countless mobile and Smart TV operating systems like Symbian and Netcast. Back then, we at More Screens invested significant resources and capital to integrate these technologies into our offerings, all with the goal of delivering a seamless and reliable experience to our customers.

But where are those once-promising technologies today? Sadly, many have faded into obscurity, replaced by more robust and standardized solutions like HLS, DASH, and HTML5. This evolution underscores a fundamental truth about our industry: technology is transient, and innovation is a constant. It’s a reminder that to remain relevant and competitive, we must continually invent, evolve, and embrace new technologies.

Today, as More Screens continues to develop and provide cutting-edge technology for OTT platforms, we frequently hear industry buzzwords like “industry-leading,” “unique,” or “award-winning.”  These terms are indeed powerful, but they carry even greater weight when we reflect on the journey we’ve all taken and the valuable lessons we’ve learned from the technologies that have come and gone.

So, how do we evaluate the right partner for a new OTT project? Drawing from my extensive experience, here are five key considerations that I believe are essential:

 

  1. Long-term partnership approach:  In my experience, successful OTT projects resemble a marriage between the supplier and the customer. Success is built on deep mutual understanding, compromise, and respect, ensuring that the relationship can thrive and the customer can continue to grow over the long term. At More Screens, we’ve maintained relationships with some of our clients for over a decade, a testament to the value of nurturing and sustaining long-term partnerships that support mutual growth.
  2. Commitment to innovation: The streaming industry is in a constant state of flux, and the willingness to innovate and experiment with new technologies and processes is crucial. By embracing innovation, companies can improve daily operations, enhance scalability, and provide a superior user experience – key factors in maintaining a competitive edge.
  3. Flexibility and responsiveness: A strong technology partner must be adaptable, ready to adjust and upgrade their product based on customer feedback and evolving market demands. Rather than being rigidly bound to their own product roadmaps, they should demonstrate flexibility to ensure the partnership can evolve and meet the ever-changing needs of the market.
  4. Support excellence: For many customers, OTT is their core service, meaning that it requires around-the-clock support to ensure everything works as expected. A reliable partner should provide 24/7 support services, with a team that is easily reachable through various communication channels, whether it’s a support desk, phone, or any other platform.
  5. Flexible licensing and pricing: It’s essential for both the supplier and the customer to have a sustainable business model where both parties can generate profit. Each customer is unique and deserves a tailored pricing or licensing model to match their specific needs, whether it’s through CapEx or OpEx.
As we move forward in this dynamic industry, these principles will continue to guide how we at More Screens approach new opportunities and challenges, ensuring that we remain a trusted and innovative partner in the OTT space.

We’d love to connect with you at the upcoming IBC in Amsterdam! Visit us at our booth in Hall 1, F11, to learn more about Spectar+, a modular Multi-Screen OTT platform. If you’d like to schedule a meeting in advance, you can do so through our website www.morescreens.com. We look forward to seeing you there!

 

 

 

IABM Student Bursary scheme – 15 years of fostering new talent for the Broadcast and Media industry

Stuart Ray, Head of Skills and Development, IABM

Now in its fifteenth year, the IABM student bursary scheme offers students from some of Europe’s top Media technology courses a Delegate Pass to IBC in Amsterdam.

At IBC 2024, students from RheinMain University in Wiesbaden, Solent University in Southampton and the Université Polytechnique Hauts-de-France in Valenciennes will once again have a chance to experience everything IBC has to offer – from wandering the many Exhibitor Halls to attending conferences.

Over 100 students have benefitted from the scheme since its inception, with travel, hotel and daily expenses covered by the IABM, together with close mentoring from the IABM team throughout the event to ensure they get the most out of the experience.

 

Can you remember your first time?

Your first IBC is always an experience unlike any other – the sheer magnitude of the exhibition can be quite overwhelming. But it is a fantastic opportunity to build networks and find out what is happening at the cutting edge of broadcast and media technology.

We’ve been in touch with a few of the successful bursary students over the years to find out their memories of attending IBC with the IABM scholarship and how it has impacted their career choices.

For most it was their first time visiting. “The scale of things was just immense and being thrown in from student life to the deep end of the broadcast industry and seeing every technology and software and different workflows available just in one place was amazing,” was how Daniel George (2019 Bursary Student, Solent University) described it.

Dan Ashley-Smith (2010 student, Ravensbourne University) said: “I had already attended other broadcast exhibitions in the UK prior to visiting the IBC (such as the ‘Video Forum’ at Earls Court) but was shocked to see the difference in scale – IBC was huge in comparison, and felt overwhelming when you saw just how big and diverse our industry really is.”

For Fatih Akkoc (2019, RheinMain University, Wiesbaden) it was the realization that Media Technology was not necessarily the “little, niche course topic” he had thought it was. “It was on an international scale and we had (IABM Bursary) students from France, from England, from St Petersburg – it was really great to get in touch with them and find out their background, what they study – and you can always learn something from other people.”

Many of the bursary winners commented on how the experience of going to IBC opened their eyes to aspects of the industry they had not really considered as potential career paths. “I think it really helped me,” says Craig Gardner (2010, Ravensbourne). “You learn so much in your ecosystem and Ravensbourne did a really good job of making sure it’s hands on and making sure that you’re going out and getting work experience. And that’s a really valuable thing, but you’re so focused on your learning. And then you start to hear some of these larger concepts, and you start to understand them and you get immersed a little bit more, and then to actually go and see it and touch it and talk to people who are in it, I think that just really solidifies the whole thing. It just makes it more complete in that regard.”

Dan Ashley-Smith says, “I think for me, it was obtaining a broader and contextualized view of what made up our industry, because there were so many different elements to see. Even if it wasn’t necessarily in areas that I was focused in on, it was still good to have visibility of what was involved (be it on the production or delivery side.) It was also good to get a heads up on what areas to start learning more about, see the direction in which the industry was moving in, and start focusing in on areas such as the IT aspect of broadcast. I decided to specialize in IT when I was at Ravensbourne, off the back of my trip to the IBC, because I realized just how important those skills were going to be, and how in demand they would be in years to come. Judging by the dependence on IT throughout the projects I’ve worked on since, I definitely made the right choice.”

Daniel George: “I reflected on it as being a really kind of profound kind of turning point in that there was a lot more to this industry than I realized. That really broadened my mind as to what I actually potentially wanted to pursue full-time.”

Any first-time visitor to IBC can find the experience daunting, but for students it can feel doubly so. Alexandre Lavaud, a graduate of the Université Polytechnique Hauts-de-France in Valenciennes, and a bursary winner in 2022, observed: “it was very impressive of course for us because we were just little students going in the big professional world so it was very difficult to feel like legitimate people there… but the main point for me was to feel more confident to find a bigger company that I’ve ever thought about to work in after my studies.” Alex now works as a Digital Compositor for MPC in Paris, “…and I’m quite sure that before attending IBC 2022, I was not even thinking about trying to work here.”

Happy Talk

It can feel intimidating as a student at IBC, wondering if people on the stands will have the time or inclination to talk to you. But most of our bursary winners found exhibitors very open and willing to engage. Indeed, some felt it was quite a nice relief for those exhibiting to be able to have a break from being in ‘hard sell’ mode all the time.

“Going in as a student was more helpful in learning about technologies and things we didn’t have exposure to before because it meant that the people working the stands weren’t primarily focused on selling. They were more than happy to explain because they’d already identified you as not someone that they thought they were going to get a multi-million-pound contract out of. But they were still happy to talk and have that conversation. So you could benefit a little bit more not having been approached from a sales perspective,” says Daniel George.

Craig Gardner agreed: “Obviously, if a client did walk up to the stand, they were going to move over to the client thing. So you’ve got to be prepared for that. But at the same time, there’s probably an element of relief that they can just have a regular conversation and actually teach someone something.”

Fatih Akkoc (2019, RheinMain) felt that some exhibitors were more willing to talk to him because he was a student. He pointed out that bursary winners can “use this privilege of being a student because people will look at you as okay, you’re someone who wants to learn something. You’re not the competition, you’re not from another company. Everywhere I went, I talked to people, I asked them questions and they were so open and wanted to talk.”

Daniel George added: “And the fact that it was a much more open-minded discussion rather than a sales pitch, which definitely boosted the educational factor for me. It meant I could actually ask questions and get an honest answer rather than, you know, a competitive sales answer. And we found that they are always really kind of open to reaching out and actually engaging because at the end of the day, you are the next generation of talent who are going into the industry.”

The opportunity offered by IABM is greatly valued by the partner universities, with a rigorous selection process in place to choose the lucky winners. “The networking opportunities with engineers and executives of industrial companies has done a lot to boost their motivation, professional orientation and confidence to work for our industry”, said Professor Wolfgang Ruppel of RheinMain University. “We are very proud and honored to have been a partner university of IABM for so many years.”

Michel Pommery, co-ordinator of the Media Technology department at Université Polytechnique Hauts-de-France added, “The IABM Student Award selection process is one of the highlights of our academic year. Not only does it enable some of our students to attend IBC in Amsterdam, a valuable experience in itself for anyone intending to work in the broadcast and media sector, it also allows us to insist upon the importance of regular technology watch activity in order to keep up to date with innovations, along with the ability to communicate well in English. The skills they develop throughout the selection process are an asset in their professional life.

“The award winners always appreciate their time at IBC which gives them a real opportunity to discover the latest technologies and trends, and exchange with professionals and other students from around the world. This very often leads to informative discussions and projects with their classmates (and indeed lecturers) after the exhibition as they share what they have seen and learned.”

 

Making the most of it

We asked our former winners to give their top tips for the ten IABM Bursary students attending IBC in 2024.

As IABM bursary students have a pass which includes access to the Conference programme, many commented that this was a chance not to be missed.  “The conferences were really great,” said Fatih Akkoc. “I remember there was a Hollywood panel where Warner Brothers,  Disney and also some other executives were sitting there and talking about the Hollywood vision for 2030. It was really great to be there and listen to the big players.”

Daniel George observed that “Going along to the conference panels was really interesting – things like media asset management in complex international workflows and the kind of stuff that’s quite niche, but it was really good to get an appreciation for different parts of the industry.”

Craig Gardner remembers attending a particular session in 2010. “They were talking about product placement replacement. So if you had a Coke can, you were able to replace it with a Pepsi can. It just felt so interesting. It’s so funny because I watched that talk, super interesting, and then walked away with it thinking I have no idea how that is going to impact me at all. And now I’m talking to companies that do product placement! I’d seen it 14 years ago and now we are looking at implementing it into some of our workflows. So I think if future attendees have the opportunity to attend the conference and pick out things that sound interesting or pick out things that they don’t know anything about, I think it’s a valuable use of their time.”

Another common recommendation was to plan ahead – study the programme or use the app to identify particular companies or conference sessions you want to see. “Definitely do a little bit of prep work. It’ll pay dividends,” says Craig Gardner.

“You should be prepared,” Fatih Akkoc says. “There are these exclusive panels so I suggest to go ahead read through the program and pick some interests.”

Christoph Gerdener (2019, RheinMain) agrees. “I would definitely say get the program and look through everything, mark everything that you think is interesting. I remember very, very clearly there was one session on career development advice for students with BBC, ITV, Sky etc. And I was the only student that showed up! I had five different people from five different broadcasters, just talking to me for an hour!”

Others highlighted the international aspect of the Bursary programme and the benefits that can bring. “First and foremost, get to know the people you’re there with,” said Daniel George. “It makes your experience both as a delegate and as a visitor to Amsterdam so much more beneficial when you’re able to go around the exhibits as a group. It’s a lot less intimidating when you’re travelling as a group and kind of moving as a pack rather than, you know, just one sheepish, fresh-faced student there yourself.”

And all were clear on one thing – the opportunity to network and make connections is too good to miss.

Christoph Gerdener commented: “I think at the beginning I was somewhat shy, but then I like was just, get over it! Hey, here’s my LinkedIn. Can you just add yourself? And so I still have like a lot of like those connections there.

“And you never know when that might come in useful. It’s very much an industry where people know each other. The network thing is really important.”

“Speak with people, build contacts, try and organize work experience with companies if you can,” says Dan Ashley-Smith. For Fatih Akkoc, due to complete his Masters in the next few months, it’s also about future planning. “Go ahead, talk to those people who are professionally there and ask them, okay, what are you doing for a living? And get a profile. It can give a much, much clearer vision for their own future. Because for a student that’s the biggest question of all – what am I going to do with my degree afterwards? So, while networking with others you should also focus on that.”

See you at IBC

There will be ten IABM Bursary winners at IBC this year. If you see them please do say hello. After all, it’s down to YOUR membership of IABM that we are able to give them the opportunity to be at IBC. And if you can offer any demos or time to formally meet them please do get in touch and let us know via training@theiabm.org.

 

Where are they now?
The former bursary winners who contributed to this article are:

 

Fatih Akkoc – Cloud Engineer, SWR, Germany

Dan Ashley-Smith – Freelance Broadcast Systems Integration Engineer

Dan George – Operational Production Engineer, BBC, London

Craig Gardner – Snr Director Media Operations NBC Universal, USA

Christoph Gerdener – Live IP Engineer WDR, Cologne, Germany

Alexandre Lavaud – Digital Compositor, MPC, Paris, France

 

GrayMeta – Video QC in the media supply chain goes beyond pass/fail

Dan Daube, VP Customer Success, GrayMeta

Traditionally, Quality Control (QC) processes required substantial hardware setups and dedicated physical spaces within studios or production facilities to determine whether a program could be broadcast. The prevailing methods used in these suites were relatively simple measurements: an asset was measured and could be over or under specific video or audio values. If the values fell within range, it was thumbs up; if the limits were exceeded, it was thumbs down.

It airs or it doesn’t.

Fast-forward through a flurry of technology advances, a veritable tsunami of new regulations and requirements (and a real tsunami), then a pandemic, and along the way the emergence of new distribution methods. Each new method required more checks, measurements and validations.

Media companies need innovative solutions for all these changes. In fact, all content creators want to streamline their operations, reduce costs, and ensure the quality of what they distribute via any of the distribution methods that exist: FAST (Free Ad-Supported Streaming TV) channels, VOD, streaming, web, mobile, cable, satellite and, yes, over-the-air broadcast.

Meet the family

GrayMeta’s Iris QC family of applications comprises advanced solutions that integrate media playback, validation, and quality control capabilities. These applications accurately verify and measure audio and video content against industry standards, making each a trusted tool across well-known networks, studios and post houses.

Iris QC Pro is the flagship desktop-based application providing complete measurement, validation and quality control for all global standards.

Iris Anywhere QC provides a browser-based experience with virtually all the desktop features, enabling cloud-based workflows or on-premises workflows that rely on S3-compliant storage.

Iris Play is the newest application, providing a low-cost, timecode-accurate player and basic QC workflow tool for asset validation and approvals.

Level with me

The cornerstone of asset QC is still the measurement and verification of video levels, proper color reproduction, and audio that isn’t distorted or out of phase. On that foundation, though, there are many variations, as standards and requirements vary around the globe and even within regions, or as dictated by platforms. Loudness measurements, closed-caption sync, Dolby audio and video encoding, frame rates, frame sizes, codecs, formats, and language use all require measurements, validations, checks, and review.
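
As one concrete example of the kind of measurement involved, the sketch below checks integrated loudness against the common EBU R 128 delivery target of -23 LUFS using the open-source soundfile and pyloudnorm packages. It illustrates the measurement in general, not how Iris implements it, and the ±1 LU tolerance is an assumption that varies by delivery spec.

```python
# Sketch: integrated loudness check against the EBU R 128 target (-23 LUFS).
# Uses the open-source soundfile and pyloudnorm packages; this illustrates the
# measurement, not the implementation inside Iris.
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -23.0
TOLERANCE_LU = 1.0   # +/- 1 LU is a common delivery tolerance (spec-dependent)

def check_loudness(path: str) -> tuple[float, bool]:
    data, rate = sf.read(path)                  # audio samples and sample rate
    meter = pyln.Meter(rate)                    # ITU-R BS.1770 meter
    loudness = meter.integrated_loudness(data)  # integrated loudness in LUFS
    passed = abs(loudness - TARGET_LUFS) <= TOLERANCE_LU
    return loudness, passed

if __name__ == "__main__":
    lufs, ok = check_loudness("programme_mix.wav")
    print(f"{lufs:.1f} LUFS ->", "PASS" if ok else "FAIL")
```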

Whether it’s Iris QC Pro or Iris Anywhere QC, each provides detailed analytics, review and reporting features covering virtually any video, audio, caption, metadata or compliance issue. Iris will play back virtually any video codec, wrapper and audio format, and supports over 20 closed caption/subtitle types. (In fact, Iris supports two caption types used only for in-flight or shipboard entertainment assets.)

Metadata matters

Technical and asset metadata are read directly from the file in Iris, and these values or measurements can be templated to provide a Pass/Fail indication as soon as the asset is opened. If you already have an automated QC report, Iris enables the import of data from these vendors: Interra Baton, Venera Pulsar, Tektronix Aurora, Telestream Vidchecker, and Telestream Cloud. Each point can then be reviewed with “eyes on glass”.

Beyond the traditional QC metadata, temporal or time-based metadata is just as important to check prior to distribution. Everything from credit placement, commercial breaks, show opens, recaps, narrative subtitles to color bars, slates and total running time are all necessary for airing on most platforms and all must be checked.

For quick checks of existing temporal metadata, the Iris Document Viewer can import a CSV, JSON, XML or basic EDL alongside an asset, and each timecode can be checked with a mouse click that jumps to the corresponding point in the player. For creating or checking more comprehensive temporal metadata, the Iris Annotation tools and Segment data can be harnessed to create, correct, edit or delete any temporal or compliance metadata for an asset.
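
To show the kind of conversion involved in jumping from a timecode entry to a player position, here is a minimal sketch that reads a simple CSV of labelled timecodes and converts each one to a frame number. The column names and CSV layout are illustrative assumptions, not the Iris Document Viewer’s actual schema, and drop-frame timecode is deliberately ignored.

```python
# Sketch: convert labelled "HH:MM:SS:FF" timecodes from a CSV into frame
# numbers so a player can seek to them. Column names are illustrative and
# not the Iris Document Viewer's actual schema; non-drop-frame only.
import csv

def timecode_to_frames(tc: str, fps: int = 25) -> int:
    hours, minutes, seconds, frames = (int(p) for p in tc.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

def load_markers(path: str, fps: int = 25) -> list[tuple[str, int]]:
    with open(path, newline="") as f:
        reader = csv.DictReader(f)  # expects "label" and "timecode" columns
        return [(row["label"], timecode_to_frames(row["timecode"], fps))
                for row in reader]

if __name__ == "__main__":
    for label, frame in load_markers("segments.csv"):
        print(f"{label}: seek to frame {frame}")
```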

You’ll never work alone

The Iris applications include Iris Admin Server for easy management of licenses, users and environments, ensuring flexibility as no user is restricted to a single machine. Additionally, the Iris Admin Server promotes collaborative work through features like annotations, project sharing and API integrations with your media workflow infrastructure. Projects can be shared between groups or users, assigned programmatically, and are a key element in automating your Media Supply Chain workflows. Plus, users can create quick review frames to share, and make comments or updates to help others in any part of the QC process.

It’s not easy being green

Adopting comprehensive software QC solutions like Iris QC Pro and Iris Anywhere QC contributes significantly to energy savings and sustainability goals. By eliminating the need for hardware-intensive setups and the real estate to house such equipment, media companies can easily reduce their energy consumption. This not only aids in compliance with global environmental standards but also aligns with the corporate responsibility goals of many organizations.

One more thing

Both Iris QC Pro and Iris Anywhere QC are now equipped with ultramodern AI (Artificial Intelligence) that performs language detection on all audio tracks, reducing manual workload and minimizing human error. Language detection for captions is coming very soon.

Iris QC Pro and Iris Anywhere QC are at the forefront of transforming the media supply chain into a more flexible, efficient, and environmentally friendly industry. These platforms not only ensure high-quality media content but also support the scaling needs of a global market. By reducing the dependency on physical assets and using cloud capabilities, these tools offer a sustainable alternative that aligns with the future of media production and distribution. As the industry continues to evolve, embracing such innovations will be key to staying competitive and responsible in an evolving video-driven world.

Godox KNOWLED App: Simplifying Light Mapping for Virtual Production

Lighting control is crucial in film and television production, directly influencing the quality of each shoot. The Godox KNOWLED app emerges as a game-changer, providing an efficient and intuitive lighting control tool that incorporates the latest technological innovations.

As the final piece in perfecting the film lighting ecosystem, renowned lighting equipment manufacturer Godox has introduced the Godox KNOWLED app for tablet lighting control. Available now on both iOS and Android platforms, this app is built on the universal DMX protocol, making it capable of managing all DMX-enabled lights, including adjustments for brightness, color, and color temperature. This professional lighting control application is designed specifically for filmmakers.

Users can build their own light library, regardless of the brand, ensuring flexibility and customization in lighting setups. This feature is particularly valuable for productions using a diverse array of lighting equipment.

Streamlined Light Mapping

The Godox KNOWLED app revolutionizes light mapping, making it exceptionally user-friendly and efficient. Cinematographers, DPs, and gaffers can now perform light mapping directly from a tablet. By importing video clips of a scene into the app, the on-set lighting changes dynamically in real time to match the video background, enhancing the realism of the film’s visuals. Users can drag the light sampling points on the tablet to the desired positions and individually adjust the size, brightness, and color temperature range of each sampling point. For smaller scenes, this greatly simplifies complex configurations and reduces the learning curve, ultimately lowering shooting costs, improving workflow efficiency and leaving more room for creativity.
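
Conceptually, each sampling point boils down to averaging the pixels of a region of the background video and turning that average into a colour and a light level. The sketch below shows that idea with OpenCV and NumPy; it is a generic illustration under those assumptions, not a description of the KNOWLED app’s internals.

```python
# Conceptual sketch: average a region of a background-video frame and turn it
# into an RGB colour plus dimmer value for a light. This illustrates the idea
# of a "sampling point"; it is not the KNOWLED app's implementation.
import cv2
import numpy as np

def sample_point(frame_bgr: np.ndarray, x: int, y: int, size: int = 50):
    """Mean colour of a size x size patch centred on (x, y)."""
    half = size // 2
    patch = frame_bgr[max(y - half, 0):y + half, max(x - half, 0):x + half]
    b, g, r = patch.reshape(-1, 3).mean(axis=0)      # OpenCV frames are BGR
    dimmer = int(0.2126 * r + 0.7152 * g + 0.0722 * b)  # rough luma -> 0..255
    return int(r), int(g), int(b), dimmer

if __name__ == "__main__":
    ok, frame = cv2.VideoCapture("scene_background.mp4").read()  # first frame
    if ok:
        print(sample_point(frame, x=640, y=360))
```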

Integration with Virtual Production

As virtual production technology continues to grow, the Godox KNOWLED app’s light mapping capabilities provide robust support for this innovative field. Accurate lighting control is vital in virtual environments, and the Godox KNOWLED app excels in this aspect. It allows precise adjustments, enabling users to simulate natural light and create special effects seamlessly within virtual scenes. This integration enhances production efficiency and offers expanded creative possibilities.

User-Friendly for Beginners

The Godox KNOWLED app is designed to be accessible not only to seasoned professionals but also to beginners. Its straightforward interface and easy operation enable users without extensive technical backgrounds to quickly learn light mapping. With simple touch controls, achieving complex lighting effects becomes intuitive, significantly reducing the learning curve.

Art-Net Wired Connection for Stability

While wireless control is highly convenient, professional film sets often require the stability and responsiveness that only wired connections can provide. The Godox KNOWLED app supports Art-Net wired connections, favored for their fast response times and reliable stability. This ensures precise and consistent lighting control, crucial for high-stakes production environments. For large-scale projects or scenarios demanding utmost reliability, Art-Net wired connections are the optimal choice.
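
For readers curious what actually travels over that wired link, the sketch below assembles a single ArtDMX packet (the DMX-over-Art-Net message) and sends it over UDP. The packet layout follows the published Art-Net specification; the three-channel mapping and the node IP address are purely illustrative assumptions, unrelated to any specific Godox fixture profile or the KNOWLED app itself.

```python
# Sketch: build one ArtDMX (Art-Net DMX) packet and send it over UDP port 6454.
# The three-channel mapping and node IP are illustrative only, not tied to any
# specific Godox fixture profile.
import socket
import struct

ARTNET_PORT = 6454

def artdmx_packet(universe: int, dmx: bytes, sequence: int = 0) -> bytes:
    if len(dmx) % 2:                               # DMX payload length must be even
        dmx += b"\x00"
    return (b"Art-Net\x00"
            + struct.pack("<H", 0x5000)            # OpCode ArtDMX (little-endian)
            + struct.pack(">H", 14)                # protocol version 14
            + bytes([sequence, 0])                 # sequence, physical port
            + struct.pack("<H", universe)          # SubUni + Net
            + struct.pack(">H", len(dmx))          # data length (big-endian)
            + dmx)

if __name__ == "__main__":
    levels = bytes([255, 128, 0] + [0] * 509)      # channels 1-3, rest zero
    packet = artdmx_packet(universe=0, dmx=levels)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet, ("192.168.1.50", ARTNET_PORT))  # node IP is an example
```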

Ideal for Small Studios

For small studios, the Godox KNOWLED app offers remarkable benefits. Its wireless control capabilities simplify equipment setup while providing powerful lighting control, allowing small studios to achieve professional-grade lighting effects effortlessly. Whether for product shoots, commercials, or independent films, the Godox KNOWLED app proves to be an invaluable asset.

In summary, the Godox KNOWLED app brings a revolutionary approach to lighting control in the film industry. By integrating cutting-edge technology with a user-friendly design, it meets the needs of both professionals and beginners, enabling efficient, precise, and creative lighting management. This app offers unlimited possibilities for enhancing creative production.

TotalMedia’s AI exploration for broadcasting

Huang Jin, CTO, Arc-video, TotalMedia, Danghong

Introduction

The world of broadcasting has seen incredible change, from silent films to the high-definition digital content we enjoy today, accessible across various platforms. Now, the industry appears ready for another exciting evolution, driven by the potential of Artificial Intelligence. TotalMedia is actively involved in this exploration of AI’s possibilities for broadcasting. We focus on using AI to enhance various aspects, aiming to improve efficiency, audience engagement, and creative freedom. We believe our solutions can contribute to shaping how audiences experience media in the future.

TotalMedia’s software-defined video pipeline adapts to a wide spectrum of broadcasting needs, giving broadcasters unparalleled flexibility and customization options. It integrates seamlessly with existing workflows, leveraging widely used CPUs and GPUs to enable flexible deployment, whether on-premise or in the cloud. GPU acceleration of demanding AI computations allows a wide range of video AI features to be integrated, giving broadcasters a significant edge in efficiency and creativity.

Compression and workflow efficiency

Imagine enjoying your favorite shows in stunning quality without worrying about data usage. TotalMedia’s innovative content-adaptive encoding achieves this by reducing bitrate by over 20% while maintaining exceptional visual quality compared to traditional encoding methods. This translates to a smoother viewing experience for audiences who consume less data, and significant cost savings for broadcasters on bandwidth.

TotalMedia further streamlines workflows by automating tasks with video analytics. Processes like metadata tagging, content management, and moderation become automatic, transforming vast video libraries into easily searchable, well-organized resources. Creators benefit from this automation by focusing on storytelling instead of administrative duties. This increased efficiency not only boosts productivity but also fosters a more dynamic and creative broadcasting environment.

AI-driven enhanced viewer experiences

Today’s audiences crave exceptional broadcast experiences. TotalMedia delivers with innovative AI models that elevate viewing in several ways:

Richer Colors: Yearn for those vibrant hues? TotalMedia’s AI employs color cast correction, tone enhancement, exposure correction, contrast adjustment, and saturation enhancement to create a visually stunning and immersive experience.

Sharper Details: Low resolution content often lacks the crispness of modern broadcasts. TotalMedia’s AI-powered super-resolution technology upscales content to HD and 4K resolutions, breathing new life into classic favorites.

Smooth Action: Fast-paced scenes and sports demand seamless motion. TotalMedia utilizes Generative Adversarial Networks to enhance smoothness and stability, resulting in exceptional frame interpolation and a captivating viewing experience.

Movie Restoration: Film grain, scratches, and faded colors can ruin the enjoyment of older movies. TotalMedia’s AI intelligently identifies and removes these imperfections, restoring films to their former glory.

By ensuring smoother, clearer, and more vibrant video playback, TotalMedia’s AI advancements offer viewers an unparalleled level of engagement and immersion.

Creativity: AI generated content combined with traditional AI tools

TotalMedia explores more efficient and creative production workflows by combining Artificial Intelligence Generated Content (AIGC) with traditional AI tools to enhance video editing, sports content production, NeRF 3D object creation, and more.

TotalMedia Video Matting supports background matting with AI background removal, replacing the original background with AI-generated images or video. This removal, generation and replacement streamlines production processes and empowers businesses and individual users to craft videos effortlessly.
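
At its core, the replacement step is a per-pixel blend of the matted foreground and the new background weighted by an alpha matte, as the short NumPy sketch below illustrates. It shows only the generic compositing math, not TotalMedia’s matting model or pipeline.

```python
# Sketch: composite a matted foreground over a new (possibly AI-generated)
# background using a per-pixel alpha matte. This shows only the compositing
# step, not TotalMedia's matting model itself.
import numpy as np

def composite(foreground: np.ndarray,
              background: np.ndarray,
              alpha: np.ndarray) -> np.ndarray:
    """foreground/background: HxWx3 uint8, alpha: HxW float in [0, 1]."""
    a = alpha[..., None].astype(np.float32)        # broadcast to 3 channels
    out = (a * foreground.astype(np.float32)
           + (1.0 - a) * background.astype(np.float32))
    return out.clip(0, 255).astype(np.uint8)

if __name__ == "__main__":
    h, w = 720, 1280
    fg = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
    bg = np.zeros((h, w, 3), dtype=np.uint8)       # stand-in for a generated plate
    matte = np.full((h, w), 0.5, dtype=np.float32)
    print(composite(fg, bg, matte).shape)
```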

TotalMedia Slow Motion, Highlight Reel, and Smart Portrait Screen Streaming for sports broadcasting enhance the viewing experience by delivering slow-motion and highlight reels for sports programs, and generate portrait-format streams dedicated to mobile phones that focus on star athletes or ball-carrier movements.

Additionally, TotalMedia 3D supports volumetric video and NeRF. It helps with the capture, generation and interaction of spatial video content, supports 2D-image-to-3D-object conversion, and elevates the viewing experience for XR. For example, 3D volumetric videos can be exported to both 3D model and MV-HEVC video formats, then manipulated and played back on Apple Vision Pro.

Conclusion

TotalMedia’s AI-powered broadcasting solutions are making a significant impact on media consumption. We’re constantly striving to improve efficiency, innovation, and user experience, pushing the boundaries of what’s possible in content creation and delivery. We believe AI has the potential to revolutionize the industry, and we’re excited to be at the forefront of this exploration, alongside other innovators in the field.

About TotalMedia

With over three decades of expertise in image and video core technologies, TotalMedia are proud to empower industries that heavily rely on video, including Broadcasters, TV Stations, Cable Networks, Telecom Operators, OTT, Content Providers, Education, and even emerging sectors like In-Vehicle Infotainment systems. To learn more, please visit www.totalmedia.ai

Huang Jin, who graduated from Zhejiang University with a master’s degree, is responsible for hardware partner relationships and has also led Smart City and IVI product development. From 2003 to 2015, he served as R&D project director and technical architect at ArcSoft.

LTN – Looking beyond satellite and fiber: why it’s time to transition to IP

Roger Franklin, Chief Strategy Officer, LTN

In the pre-streaming age, video delivery and consumption were simpler. Feeds consisted of fixed content transmitted from a single source to a broad audience via cable or broadcast networks, and there was little room for targeted segmentation. Fast forward to today, and audiences are scattered across multiple streaming platforms, channels, and devices. As a result, media companies face the challenge of creating diverse experiences that capture audiences’ attention while also tailoring content to deepen engagement — all while maximizing profitability.

Linear distribution methods, like satellite and fiber systems, had their place in the evolution of video distribution. However, they fall short when addressing the complexities of the modern media landscape. IP video distribution has now cemented itself as the leader in this space, offering media companies greater efficiency, scale, and monetization that satellite and fiber can no longer match.

The disadvantages of satellite video distribution

It’s no secret that satellite workflows have become increasingly complex and untenable for today’s market. Media companies are grappling with the dual pressures of meeting insatiable content demands while facing the constraints of shrinking satellite bandwidth. As 5G wireless carriers repurpose satellite capacity, media transmission pathways narrow, leading to interference issues. Troubleshooting becomes time-consuming, and resource reallocation disrupts transmission traffic. The clash between content abundance and limited satellite resources necessitates a more agile solution.

As digital consumption of video content grows and evolves, media companies need the space to experiment with new channels and live event feeds quickly and efficiently. They also need the agility to modify them based on changing viewer preferences and the scalability to reach both global and hyperlocal audiences across multiple platforms and devices. Satellite video distribution is far too rigid to meet these demands, and it does not offer the flexibility required for navigating rapid changes in the media landscape.

The limitations posed by fiber video distribution
Fiber was intentionally designed for content to be distributed by one source to a select few contracted distribution outlets. This inherently limits its reach and customization capabilities. Media companies using fiber video distribution are missing key revenue-generating customization opportunities, such as the ability to serve targeted, relevant content to specific audience segments.

Vulnerable to signal loss caused by manufacturing flaws, environmental and construction interference, and general corrosion over time, the fragility of fiber is impossible to overlook. Unless reinforced with the installation of multiple fiber paths — which is complicated and costly to achieve — the lack of redundant routes can threaten video transport and lead to network disruption that can be expensive to diagnose and repair, requiring specialized service and equipment.

These issues could make satellite and fiber video distribution models too costly for media companies, as they stifle the potential for greater business agility and innovation at a time when the landscape demands it.

Greater efficiency and scale with IP video distribution

The public internet’s basic structure doesn’t include the native ability to multicast or to support both low latency and high reliability. During periods of high use, internet routing protocols can be overwhelmed by traffic, causing congestion that can lead to packet delays or loss. With live video distribution, there are no second chances, and the economic implications of these issues can be costly. It is critical that media companies have a robust network that can mitigate these obstacles in order to retain and grow their viewership.

This is where the value of a proprietary IP network becomes invaluable for effectively addressing today’s challenges and achieving reliable, cost-effective delivery of live and on-demand content.

IP video distribution gives media companies greater control of their full video chain workflows, enabling them to customize with efficiency and scale. Compared to the rigidity and expense of fiber and satellite video distribution or multicast transport, investing in the right IP video distribution infrastructure on a proprietary network empowers simultaneous distribution and customization of high-value content to multiple destinations worldwide. This scalable, efficient customization offers the opportunity to make every event and every second of a video stream an asset that maximizes monetization.

Driving long-term business growth

It’s not uncommon for media companies to have one foot in digital content delivery and the other in legacy linear distribution, seeking a hybrid approach that offers greater flexibility and revenue. Having a future-ready digital strategy in place that incorporates IP video distribution will be key to giving media companies the edge for greater monetization.

IP video distribution boasts new capabilities and advantages that go beyond what satellite or fiber can offer. It transforms the value of content: through the ability to deliver high-quality content across platforms, media companies gain access to revenue streams that satellite is not able to provide.

Tier 1 media companies around the world no longer view the transition to IP video distribution as something they have to do — it’s now something they want to do to expand their reach both globally and hyperlocally, monetize video content, and ensure ultra-low latency and high reliability.

Imagine Communications – The balancing act

Lowering costs and increasing sustainability with cloud and on-prem playout systems

Andy Warman, CTO, Video, Imagine Communications

 

For today’s broadcasters, the choice between deploying services on-prem or in the cloud has become a critical decision. On the surface, it’s a simple one — on-prem systems require significant investments in physical infrastructure and time, while cloud solutions offer rapid deployment without the maintenance headaches.

However, if you dig a little deeper, the choice isn’t so clear-cut. To balance costs and their environmental footprint, broadcasters must determine when using cloud over on-prem solutions makes the most sense.

Cloud vs. On-prem systems

On-prem systems can take months to commission and install, while broadcasters can deploy cloud channels in minutes — and not be burdened with the time and expense of maintaining the hardware. However, the cloud is an on-prem system somewhere. It has physical infrastructure with an environmental footprint, and a provider responsible for maintaining it.

A good analogy is to compare the cloud versus on-prem decision to that of taking a taxi versus renting, leasing, or buying a car — each makes sense in certain scenarios. In this analogy, the cloud is like taking a taxi, as it allows broadcasters to use a resource to solve their workflow needs in real time. It is also like a rental car in that broadcasters can prearrange to use it for short periods, or they can make a longer-term commitment — a lease — and simply “hand it back” at the end of the term.

Also like a car rental or lease, there are hidden costs involved in utilizing the cloud, such as storage consumption and egress charges. For broadcasters using a resource 24/7/365, these hidden costs can make renting infrastructure a more expensive option than buying the on-prem equipment outright.

Modeling costs and environmental impacts

Broadcasters generally don’t have full visibility into the cost and environmental impact of using a particular cloud resource, as it’s built on the idea of shared resources. However, cloud providers have deep pockets for R&D and investment to build greener datacenters, so their users do not have to make those investments themselves.

Cloud also offers a general boost in efficiency. For example, if a broadcaster only needs to handle file transcodes and transfers for a few hours a day, that cloud capacity can be used by somebody else at other times, provided it is not a dedicated host or instance type.

In on-prem systems that are always running, keeping a computer powered on while only using it, say, 10% or 20% of the time is highly inefficient. But if that computer can be shared across other functions the other 80% to 90% of the time — as it could be in the cloud — efficiency is dramatically improved.

Power usage and efficiency

While it seems easier to determine power usage and efficiency on-prem compared to the cloud, the actual process is often misunderstood. For example, it’s common to confuse the power supply rating of equipment with its peak draw under maximum load.

To illustrate, Imagine uses 1,000W, platinum-grade power supplies in our PCs, rated at 96% efficiency. But six full-HD ingest channels running our most demanding codec at ~90% CPU usage draw only a little over 40% of the supply’s 1,000W capacity. When the PCs run at idle, they draw about 30% of the total capacity. As this example shows, running PCs at idle most of the time is highly inefficient.

By understanding real-world numbers based on the actual work they have done, broadcasters can see how efficient they have been and make better sustainability decisions moving forward. If a broadcaster needs to add 10 channels for only two weeks a year, it is more sustainable to provision them in the cloud than on-prem, provided cloud channels can fulfill the workflow requirements.
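
Using the figures quoted above as rough inputs, a back-of-the-envelope comparison might look like the sketch below. The 20%/80% utilisation split and the simplification that shared cloud capacity is only attributed to the workload while it actually runs are assumptions for illustration, not measured data.

```python
# Back-of-the-envelope energy comparison using the figures quoted above
# (~400 W under load, ~300 W at idle for a 1,000 W supply). The 20%/80%
# utilisation split and the "cloud only attributed while active" simplification
# are assumptions for illustration, not measured data.
HOURS_PER_YEAR = 24 * 365

ACTIVE_W, IDLE_W = 400, 300           # approximate draw under load and at idle
active_share = 0.20                   # channel actually works 20% of the time

on_prem_kwh = (ACTIVE_W * active_share
               + IDLE_W * (1 - active_share)) * HOURS_PER_YEAR / 1000

# Simplification: shared cloud capacity is only attributed to this workload
# while it is actually running.
cloud_kwh = ACTIVE_W * active_share * HOURS_PER_YEAR / 1000

print(f"Always-on on-prem: ~{on_prem_kwh:,.0f} kWh/year")
print(f"Attributed cloud use: ~{cloud_kwh:,.0f} kWh/year")
```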

Equipment useful life

Beyond efficiency, how and when the cloud should be utilized can also be determined by the useful life of equipment. Servers used by cloud providers have a four- to five-year useful life expectancy. IT departments often replace infrastructure every three to five years, even though that equipment can last seven to 10 years. Some budget servers may have a lower life expectancy due to short service and support periods. Conversely, longer lifespans are also possible, as some vendors offer longer support contracts than OEMs do.

For 24/7 operations, part of the calculation when deciding between the cloud and on-prem resources is the cost of building the solution in the facility and operating it for at least the amortization period. For occasional-use channels, archive storage, or cloud-based transcoding workflows, cloud is likely a better option. This eliminates the need to purchase PCs that will only be used for a small percentage of the year.

Strategic recommendations

For broadcasters, an effective strategy is to focus on their core high-value channels and time-consuming resources, and run those on-prem. Where it makes sense, they can use the cloud as much as possible for everything else, including as a testbed for new ideas and to quickly launch new services. If necessary, these services can be brought on-prem to manage costs, workflow requirements, and redundancy considerations. The cost of the cloud is also in flux, and this is something broadcasters should keep an eye on to see if it becomes more viable for additional workflows.

The balance between on-prem and the cloud will change based on percentage of usage and the kind of workflows and solutions needed over time. The flexibility inherent to modern playout solutions can accommodate those changes. In fact, there is actually very little dedicated playout hardware, and what does exist — even in traditional SDI environments — can be reconfigured and repurposed using well-proven IP-based technologies based on SMPTE ST 2110.

The sustainability quotient for playout applications is as malleable as it is for operational considerations. The PC-based nature of these applications is a key enabler. Computers and their networks can handle a variety of tasks and be reallocated to new functions as needed down the road in playout and other workflows. As requirements evolve, broadcasters’ ability to efficiently use and adapt the roles fulfilled by both on-prem and cloud resources is essential.

 

FOR-A Europe – Connectivity for productivity

Fabio Varolo, FOR-A Europe

The rapid adoption of IP connectivity for media is transforming our industry. It opens exciting new creative opportunities in remote production and collaborative workflows, thanks to reliable real-time transfers over the public internet. Remote working means fewer journeys for personnel and equipment, significantly reducing the carbon footprint of a production.

When managing the infrastructure for remote working, though, there is a significant limitation. IP connections are traditionally seen as point-to-point: source to one destination. That is the nature of streaming connectivity.

In broadcast and production that is inconvenient. It eliminates many of the advantages of remote working if you have to tell people they can either be at the location or at the master control. Producers, editors, compliance supervisors, script editors, subtitlers and more may be in multiple locations. Engineers may want confidence monitors at different points in the signal path.

For major drama productions, the action may be in a studio or on location, but the content needs to go to the post house in a distant city, and to the producers and execs who may be in other countries. If the shoot will need extensive visual effects, the facilities designing the VFX, rendering them and conforming the results could be spread around the world.

In short, then, to really win the full benefits of IP connectivity, there has to be a way to distribute content over a lossy network – the public internet – securely, with low latency, and to multiple destinations. Such a system needs to be based on open standards for full interoperability.

At FOR-A we developed a very capable software platform, SOAR-A: Software Optimised Appliance Revolutionised by FOR-A. The platform provides all the key functionality for IP connectivity and allows developers to create powerful applications to meet specific requirements.

Among the leading innovators with which FOR-A has collaborated on applications for SOAR-A is SipRadius, which has developed a highly performant solution for the point-to-multipoint requirement, simply and seamlessly.  By incorporating this advanced solution, the SOAR-A EDGE IP transport appliance, part of the SOAR-A platform, provides a seamless bridge into and out of the streaming connectivity environment.

To guarantee open interconnectivity, transmission uses RIST, the Reliable Internet Stream Transport, a very widely used standard. The content arrives at the SOAR-A EDGE portal as SDI, SMPTE ST 2110, NDI® or as an already compressed stream. Everything in the incoming feed is packaged into the RIST stream: identification and discovery in NDI; timing and auxiliary data in ST 2110; and more. In a simple implementation, content comes in at the remote location, is packaged and sent via a RIST tunnel over the internet, and is received at the base.
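
For a feel of what a simple RIST contribution leg can look like with off-the-shelf tooling, the sketch below builds a GStreamer pipeline that wraps an incoming MPEG-TS feed in RTP and hands it to a RIST sender. It assumes GStreamer with the RIST elements from gst-plugins-bad (roughly 1.16 onwards), and element and property names should be checked against the installed version. This is a generic illustration of RIST transport, not the SOAR-A EDGE implementation, which adds encryption, load balancing and multi-peer delivery on top.

```python
# Generic sketch of a RIST contribution leg: take an MPEG-TS feed arriving on
# UDP, wrap it in RTP and hand it to a RIST sender. Assumes GStreamer with the
# RIST elements from gst-plugins-bad (~1.16+); element/property names may vary
# by version. Not the SOAR-A EDGE implementation.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

PIPELINE = (
    "udpsrc port=5000 caps=video/mpegts "        # local MPEG-TS contribution feed
    "! tsparse set-timestamps=true "
    "! rtpmp2tpay "                              # RTP payload for MPEG-TS
    "! ristsink address=203.0.113.10 port=5004"  # RIST receiver (example IP)
)

pipeline = Gst.parse_launch(PIPELINE)
pipeline.set_state(Gst.State.PLAYING)

try:
    GLib.MainLoop().run()
except KeyboardInterrupt:
    pipeline.set_state(Gst.State.NULL)
```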

A single SOAR-A EDGE can handle multiple concurrent streams, which might all be in one direction or could include one or more return video channels. That return video might be the interviewer for down-the-line interviews, or it might be a multiviewer for the remote producer. SOAR-A EDGE units can be interconnected to transmit larger numbers of channels from a location, with automatic load balancing to ensure maximum performance, and can deliver to many peers simultaneously.

Importantly, RIST includes 256-bit AES encryption, so all media in transit is completely protected. This gives you a very low-latency, multi-channel circuit that is highly resilient against the non-deterministic nature of the internet and hardened against piracy and cyber-attack.

What makes this solution unique is that it also incorporates a WebRTC encoder, providing a way for multiple users – potentially a very large number – to see and hear what is happening. The WebRTC feed is, of course, protected by the same high-security AES encryption, along with a rigid access management layer ensuring only those who are involved in the production can log on.

This feed can be accessed via a conventional web browser on any device. For those who want a simple receiver, you can build a set-top box using a Raspberry Pi processor with a client, giving a simple and intuitive user interface.

The beauty of RIST is that it can create networks with multiple access points. To achieve this level of flexibility, each SOAR-A location is connected to a RIST server in the cloud. This accepts material from each SOAR-A device, with individual streams routed to the required destinations.

That includes routing content from one remote location to another. This would be extremely useful at a major international sports event, for example, connecting multiple sources at a stadium with multiple sources at a remote studio, and sending the packaged programme back to the broadcaster’s headquarters.

With RIST in the cloud, the server is also responsible for creating the WebRTC feeds, based on user selections of what content to deliver to those monitoring the event remotely. These feeds include metadata alongside high-quality audio and video.

It is clear that the broadcast, media and events industry is moving towards ever-more complex workflows involving key personnel in multiple locations. This architecture, using specialized devices and open-standards interconnectivity, gives users the flexibility and agility they need, without compromising latency or security.