Samsung’s SmartThings Home Automation Platform Is So Unreliable, They Should Stop Selling It

In addition to being interested in streaming media technology, I also test out and use a lot of home automation platforms and security cameras. I have systems from Canary, Sharx, Logitech, D-Link, Nest, Arlo, Piper, Archos, Wemo, SmartThings, Wink, Wirelesstags.net and others that get well tested in my household. And while I don’t usually blog about home automation technology, my experience with the SmartThings platform has been so horrible that I wanted to make my comments public in the hopes that SmartThings, or someone from Samsung, will actually care enough to fix it.

For over a year now, SmartThings has struggled to do something simple: turn my lights on and off. I have three lights set up to go on at 4pm and turn off at 11pm. And yet, twelve months later, SmartThings still isn’t reliable. Sometimes the lights work perfectly; other times they don’t work for days at a time. Sometimes only one will turn on and off while the others don’t do anything. Making the problem worse, SmartThings, which is owned by Samsung, doesn’t even have a support number you can call for help. All support is done in a chat window, which makes the support process very cumbersome.

Months ago, after complaining on Twitter to the CEO of SmartThings, he had someone from support call me, and since then I’ve had three different calls with support. And yet the lights still don’t turn on and off correctly. On each support call they have acknowledged that something on their end isn’t working right, yet they’ve never fixed it. I’ve been told that the issues I have are due to “platform issues” as well as “outages” and that sometimes “devices act up.” After they think they have fixed it, they have emailed me to say that “things should be considerably smoother now than the last week or so,” except they aren’t.

The SmartThings platform simply does not work. And if I can’t rely on it to turn lights on and off, how would I ever trust it with anything involving security in my home? Another thing that doesn’t work right is their iPhone app. Every couple of days it asks me to log in, not keeping any of my info stored. And many times the light icon will show green, indicating a light is on, when in fact it is off. Even SmartThings support told me that the way their system works, if an action does not trigger, it won’t then see any future scheduling you have set up. So if it breaks once, none of the other actions you have scheduled will fire. Of course, this makes no sense at all, and SmartThings support agreed that it is something they need to “work on”.

SmartThings advertises their platform as “intelligent living” and yet it can’t even turn lights on and off reliably. There is nothing smart or intelligent about their platform at all. It is hands-down the most unreliable technology I have in my house, other than the Nest smoke alarms, which I already sent back last year. Samsung should make SmartThings fix their platform, or just stop selling it altogether.

Facebook Details How Their CDN Works, Discusses Latency, Scaling, and Caching

In an interesting post yesterday, Facebook took to their blog to shed some light on how their CDN delivers live video for the recently released Live for Facebook Mentions offering. The post discusses how they handle scaling, latency, encoding, edge caches and proxies. You can read more about it here.

Streaming Video Alliance Ends 2015 With 38 Members, New Working Groups

In November, members of the Streaming Video Alliance met at Fox Studios in Hollywood to define the future of online video and celebrate the Alliance’s one-year anniversary.

SVA at Fox Studios, Nov 18, 2015

The alliance welcomed new members including Concurrent, Encompass Digital Media, IneoQuest, Mobolize, NBCUniversal, Verimatrix and Vubiquity. Total membership in the alliance now stands at 38, with more companies coming on board in the new year.

The association is hard at work evaluating many subjects for working groups, including Ad Insertion and Audience Measurement, Client Dev Framework, Encryption/Privacy, Geo Caching, Accessibility and Scaling, amongst others. We will announce the formation of the final working groups in the new year. If you have any questions on joining the alliance, please reach out to me at any time.

Find Anomalies: It’s Time for CDNs to Use Machine Learning

CDNs play a vital role in how the web works; however, the volume, variety and velocity of the log files that CDNs generate can make issue detection and mitigation exceedingly difficult. To overcome this data challenge, you need to first gain an understanding of your normal CDN patterns and then identify activity that deviates from the norm, in real time. This is where machine learning can take CDN services to the next level.

The Challenges
CDN operators ought to be notified immediately about sudden increases in bandwidth consumption at the PoPs or proxies in their network so they can take corrective action. The identification process begins with understanding which sources (customers) caused the abnormal peaks in bandwidth consumption. Without the ability to quickly obtain insights from fresh data, you won’t be able to foresee and prevent issues from escalating into full-blown headaches.
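To make the baseline-and-deviation idea concrete, here is a minimal sketch, not anything a CDN actually ships: each PoP keeps a rolling window of recent bandwidth readings and flags samples that deviate sharply from the recent norm. The class name, window size and z-score threshold are all illustrative assumptions.

```python
from collections import deque


class BandwidthMonitor:
    """Flags abnormal bandwidth peaks at a single PoP using a rolling z-score."""

    def __init__(self, window=60, threshold=3.0):
        self.samples = deque(maxlen=window)  # recent bandwidth readings (Mbps)
        self.threshold = threshold           # how many std-devs counts as abnormal

    def observe(self, mbps):
        """Record a reading; return True if it deviates sharply from the norm."""
        if len(self.samples) >= 10:  # need some history before judging
            mean = sum(self.samples) / len(self.samples)
            var = sum((x - mean) ** 2 for x in self.samples) / len(self.samples)
            std = var ** 0.5
            if std > 0 and abs(mbps - mean) / std > self.threshold:
                self.samples.append(mbps)
                return True
        self.samples.append(mbps)
        return False
```

In a real system you would run one such baseline per (customer, PoP) pair, which is exactly the per-source attribution described above.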

Another challenge arises from CDN system upgrades. In order to run A/B testing on specific segments of proxies, it is best to gradually deploy system upgrades. That way, you can better prevent the possibility of widespread errors. Today, CDN providers still lack the real-time visibility that is needed to address basic issues such as an increase in HTTP errors, IO access or cache churn rates. This pitfall results in delayed upgrade releases, which in turn directly impact CDN providers’ pace of innovation.

I recently had an interesting discussion about these challenges with David Drai, founder of Anodot. Drai previously co-founded Cotendo, a CDN and site acceleration company. In 2012, Cotendo was acquired by Akamai and Drai became CTO of Akamai EMEA. David said one of the main cases he remembers revolved around a version release. “We had this bug that we didn’t uncover during an A/B test of the version and we released it to all network proxies. As the person in charge, it was a complete disaster.”

Log Analytics Is Just Not Good Enough
In order to cope with these challenges, CDN providers leverage log analytics systems that gather and record billions of transaction logs from the relevant proxies. In most cases these tools are built in-house and are used to run queries and retrieve insights about network performance. But that doesn’t suffice. In some cases these are legacy systems that don’t scale, and the tools are typically not intelligent enough to automatically provide results in real time. Therefore, a generated report may be based on relatively old data, which in turn results in delayed or outdated responses.

“Say a CDN operator wants to get information about a specific customer’s RPS consumption rate per proxy and per PoP for a whole month. To obtain this information, the log management solution needs to scan billions of customer logs and extract the desired customer’s transactions, which can take days,” Drai explained. Another challenge CDN providers face is related to visibility into the operator’s network performance. CDN providers use tools such as Keynote, Gomez and Catchpoint to measure network latency, so they can switch to other providers if and when the need arises. However, although these solutions provide insights in real time, it is still a challenge to correlate current issues with an operator’s performance.
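The full-scan problem Drai describes is usually attacked by pre-aggregating raw logs into small rollups as they arrive, so a per-customer RPS query reads a compact table instead of rescanning billions of lines. A hedged sketch, assuming a hypothetical "timestamp customer pop proxy" log line format:

```python
from collections import defaultdict


def aggregate_rps(log_lines):
    """Roll raw transaction logs up into per-minute request counts,
    keyed by (customer, pop, proxy, minute)."""
    counts = defaultdict(int)
    for line in log_lines:
        ts, customer, pop, proxy = line.split()
        minute = ts[:16]  # '2016-01-12T08:56:01' -> '2016-01-12T08:56'
        counts[(customer, pop, proxy, minute)] += 1
    return counts


def customer_rps(counts, customer):
    """Average RPS for one customer per (pop, proxy), read from the rollup
    rather than from the raw logs."""
    per_key = defaultdict(list)
    for (cust, pop, proxy, minute), n in counts.items():
        if cust == customer:
            per_key[(pop, proxy)].append(n)
    return {k: sum(v) / (len(v) * 60.0) for k, v in per_key.items()}
```

The rollup grows with the number of (customer, PoP, proxy, minute) combinations rather than with raw transaction volume, which is why queries against it return in seconds rather than days.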

David says that “when dealing with CDN issues, time is of the essence. The user download rate of one of our gaming customers at Cotendo decreased by 10-15% due to an issue that took us almost a week to detect. In the world of CDN, that kind of delay can significantly damage the CDN provider’s reputation.” Last but not least, one of the main issues with most traditional analytics systems, as well as modern log management tools, is their reliance on manual, preconfigured dashboards, reports and alerts. In the dynamic world of CDN, there is no limit to the new issues and information that can be extracted from the vast amount of available data.

We Need a Different Approach
What if we could predict a bottleneck in an internet router not solely based on simple BGP rules, but on true science? Over the last decade, data analytics technologies have evolved from complex and cumbersome solutions to modern and flexible big data solutions such as Hadoop. Over the last few years, these big data technologies, including Cassandra and MongoDB, have gained the industry’s trust and have become an important component in every IT environment. The next step involves incorporating new analytics solutions that let you run queries on top of these data engines: think Google Analytics for CDN providers.

However, with all of the great advancements being made in the realm of big data, the existing monitoring tools aren’t enough. As noted above, current monitoring tools rely on human analysts who define and create flat reports and dashboards. Even with numerous different reports, reality has proven that when it comes to CDN patterns you can’t cover all cases and be notified in real time about developing abnormal behavior. It is simply not feasible when you are talking about tens of thousands of different data points in multiple dimensions.

The next step in the world of analytics is machine learning. The ultimate solution is to automate data-based learning, then develop insights and make relevant predictions. This new discipline involves running pattern recognition algorithms and predictive analytics. And while it may seem far-fetched, it is already in the works by Drai and his team at Anodot. They aim to solve the CDN challenges outlined above, as well as additional use cases that predictive analytics solutions can help with. David says Anodot’s algorithms learn and continuously define normal CDN behavior, and can therefore send out alerts about anomalies and automatically correlate between different data points. For example, the system will alert only if there is an increase in the number of HTTP errors across several proxies. “The key is zero human configurations.”
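As a rough illustration of the cross-proxy correlation idea (not Anodot’s actual algorithm), an alert could be gated on how many proxies exceed a learned error-rate baseline at the same moment, so a single noisy proxy doesn’t page anyone. The baseline and proxy-count thresholds here are invented for the example:

```python
def http_error_alert(error_rates, baseline=0.01, min_proxies=3):
    """Alert only when the HTTP error rate exceeds its baseline on at
    least `min_proxies` proxies simultaneously.

    error_rates: dict mapping proxy name -> current HTTP error rate.
    Returns (should_alert, sorted list of anomalous proxies).
    """
    anomalous = [p for p, rate in error_rates.items() if rate > baseline]
    return len(anomalous) >= min_proxies, sorted(anomalous)
```

In a zero-configuration system the `baseline` would itself be learned per proxy from historical data rather than fixed by hand, which is the piece the dashboard-and-threshold tools described above lack.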

Final Note
In a previous article, I wrote about Apple’s multi-CDN strategy. Think about a world where advanced analytics systems predict bottlenecks and automatically route traffic to the most appropriate CDN and optimal proxy. Predictive analytics can be a great answer to the challenges CDN providers have faced for years, and it seems machine learning solutions such as Anodot will be able to take the industry to a new level, creating new development opportunities in a market that has long suffered from complexity and a lack of visibility.

Comcast Expands Their Commercial CDN Offering With Live Linear Streaming Platform

Last May, Comcast launched their commercial CDN offering, and since that time has signed up some pretty large content owners while also expanding their product line. Their latest addition is a new live linear streaming service, announced at IBC 2015. The new platform gives customers the ability to easily turn up, manage and enable ad- or subscription-supported live, over-the-top, full-time channel experiences. Here’s a link to a workflow chart that maps out what Comcast’s solution looks like.

While there are a lot of vendors in the market with platforms for managing over-the-top channels, only a handful truly own an end-to-end ecosystem. Comcast now joins Level 3 and Verizon Digital Media Services as companies that own and operate the entire product line, spanning linear acquisition through distribution and playout. Because of Amazon’s purchase of Elemental, some might add them to the vendor list as well, but they don’t provide linear streaming solutions in the marketplace to the same degree as the others, a big differentiator being their lack of linear source acquisition capabilities.

A core component of Comcast’s live linear platform, used to manage channel metadata and orchestrate workflow, is based on technology from thePlatform, the company Comcast purchased in 2006. thePlatform’s core content management and OVP cloud engine, called mpx, supports both live and VOD workflows; it is now part of Comcast Wholesale and an integral component of Comcast Wholesale’s new online video product portfolio. While the live linear platform market is starting to get crowded, Comcast has a leg up on most of their competitors, since they have been doing linear content aggregation for more than twenty years.

And not just for Comcast or NBC channels, but for many others like the NHL Network, Pac-12, movie networks and the like. Comcast originates and distributes these live channels over satellite, fiber and the Internet to other MSOs, MVPDs and directly to consumers. This experience gives Comcast strong operational capabilities and a wide array of ingest and acquisition possibilities, making it really easy for Comcast to pull in a signal and then transcode, package and distribute it. The live linear workflow is in Comcast’s DNA, which isn’t the case for some of their competitors.

Comcast has also been in the streaming OTT space for years with their own offering, so they straddle both sides, with experience from the broadcast and Internet worlds. They also have a lot of experience in monitoring video quality, translated from the linear world to a streaming offering. Another strength of Comcast’s platform is their ability to turn up customers very quickly. Comcast has over 500 existing channels available in their broadcast operations and origination facilities today, and any of those channels can be made available over-the-top within a week. Any feeds not already in Comcast’s facilities can be acquired over satellite or dedicated fiber, or contributed over the Internet, very quickly, including via leading third-party providers like Level 3’s Vyvx, LTN Global Communications, Digital Comm Link, etc. Some large premium networks and broadcasters are already using Comcast’s platform for live linear streams, as well as live-to-VOD, and it sounds like Comcast will be able to announce who some of these customers are shortly.

Even though Comcast offers an end-to-end platform and has over 100 POPs in the U.S. for CDN delivery, the company’s platform and strategy are flexible. Content owners can use Comcast for just the video workflow and then use another third-party CDN of their choice. The same goes for Comcast’s player development kit (a component of mpx), which also supports third-party players, analytics systems, stream conditioning, metadata, ad decisioning, content syndication rules and content protection. Comcast can provide it as a full turnkey solution, or sell it as a modular feature set in a point solution. As for pricing, Comcast says they are very competitive for both the live linear workflow and delivery, and some of the latest broadcasters who have moved over to Comcast’s CDN have told me their performance for video delivery beats every third-party CDN around, which makes sense considering Comcast owns the last mile.