Machine Learning News Roundup – 6 Essential AI Articles of 2019

In this machine learning news roundup, we will go over some of the biggest news from 2019 that went viral or made an impact in various fields of AI. Furthermore, we will briefly cover interesting AI applications and games released in 2019 that you can try today, as well as a few open dataset resources for machine learning projects. 

Autonomous Vehicles

2019 was an eventful year for Tesla, and the company had its fair share of mishaps. Most notably, in May of 2019, a tragic accident involving a Tesla Model 3 ended in the death of the driver. The accident occurred while the car's Autopilot function was engaged: the Tesla slammed into a stationary truck without making any evasive maneuvers. The incident fueled doubts about the widespread use of autonomous vehicles and their safety on public roadways.

Waymo is another large player in the autonomous vehicle industry that made headlines last year with its own self-driving car. A subsidiary of Google's parent company, Alphabet, Waymo sent an email to the users of its ride-hailing app informing them that their next Waymo trip might be completely autonomous, with no human driver behind the wheel.

Natural Language Processing

One of the biggest impacts in the world of Natural Language Processing (NLP) was the release of GPT-2 1.5B in November of 2019. A text-generating neural network from OpenAI, GPT-2 made headlines around the world for its remarkable ability to generate natural-sounding text. Some writers have even created entire articles using GPT-2, garnering the attention of numerous machine learning influencers and well-known scientists.

OpenAI had released smaller versions of the neural network in the past, but GPT-2 1.5B is the strongest iteration yet.

In this article, OpenAI explains its five major findings:

  1. People find output from GPT-2 convincing
  2. The GPT-2 neural network can be fine-tuned for misuse
  3. Detecting synthetic text is challenging
  4. There has been no strong evidence of misuse so far
  5. Standards must be created for studying bias
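
GPT-2 itself is a large transformer trained on millions of web pages, but the core loop of statistical text generation, predicting a distribution over next tokens and sampling from it, can be illustrated with a toy bigram model. Everything below is an invented miniature for illustration, not GPT-2:

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record which words follow which, building a crude next-word distribution."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=10, seed=0):
    """Sample a chain of words, always picking a word seen after the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break  # dead end: the last word never appeared mid-sentence in training
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the model generates text and the model samples the next word"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Real language models replace the bigram counts with a neural network conditioned on the whole preceding context, which is what lets GPT-2 stay coherent across paragraphs rather than a few words.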

Synthetic Media

4. U.S. House of Representatives Hearing on the Dangers of Deepfakes

Deepfakes were one of the biggest machine learning topics of 2019. Unprecedented advancements in deepfake technology have led to widespread misuse and public fear. To understand and prepare for the threats posed by the technology, the US House Intelligence Committee held an open hearing on deepfakes and AI in June of 2019.

This article summarizes the most important points raised by each speaker, the potential dangers of deepfakes, as well as solutions and countermeasures. 

Synthetic voices and audio are emerging industries that made leaps and bounds last year. Replica Studios is a synthetic voice company that generated a buzz in 2019, attracting the attention of data scientists, celebrities, and game development studios interested in using its software. Part of this virality was due to an impressive proof-of-concept video showcasing the synthetic voices of Sundar Pichai (Google CEO), Jeff Bezos (Amazon CEO), Arnold Schwarzenegger, Kevin Hart, Morgan Freeman, David Attenborough, Snoop Dogg, Ellen DeGeneres, and even Geralt of Rivia (The Witcher).

Impressively, Replica Studios is able to make a synthetic copy of any voice using just a few minutes of speech recordings. In an interview, Replica CEO Shreyas Nivas said the technology was at a point where “Synthetic voices are indistinguishable from real voices and can rival human performances.”


6. How Google is Leading the Quest for Data with Google Dataset Search

Access to training data is one of the blockers slowing the pace of AI progress today. With deep learning especially, many models require not thousands, but millions of data instances for training. As a result, many data scientists and students turn to dataset aggregators like Kaggle and rely on open data provided by the community. To help improve access to open data, Google released a search engine solely for publishing and downloading datasets. 

While Google Dataset Search was still in beta throughout 2019, Google announced on January 23rd that it had indexed nearly 25 million datasets and that the search engine was officially out of beta.

Interesting AI Applications and Resources Released in 2019

Talk To Transformer – A user-friendly implementation of OpenAI's GPT-2 1.5B that anyone can use. Simply type in a custom prompt, a heading for an article, or the first lyrics of a song, and see what the text-generating neural network comes up with.
Google Dataset Search – As mentioned in article #6 above, this is the free-to-use dataset search engine by Google. You can both search for open datasets and learn how to get your own resources crawled by the search engine. 
AI Dungeon 2 – A text adventure game that generates unique storylines with each decision you make. Powered by GPT-2, this game can literally go anywhere, and no two stories are ever the same. Check out an example of how this works here.
Ultimate Dataset Aggregator – This dataset aggregator from Lionbridge AI includes hundreds of open datasets spanning dozens of use cases and subjects, including computer vision, parallel text, life sciences, finance, and more. The page is constantly updated as new datasets are curated.
AI is one of the world's fastest-growing industries, and there is surely more big machine learning news to come in 2020. I hope one of these AI articles sparked your interest. For more machine learning news and open dataset resources, subscribe to my Hacker Noon posts below and don't forget to follow me on Twitter.

Mapping Cybersecurity For The Distributed Web

The internet is the largest computer network in the world. Today we use it across the globe to collect, transfer, and process information through forms as diverse as data server warehouses, handheld mobile devices, and other connected devices. Because of its constantly changing size and shape, we face ongoing issues with cyber attacks, database vulnerabilities, and hardware defenses.

Traditionally, system maintainers have used static firewalls around a network perimeter and patched any discovered holes. This method is not without its demerits, as evidenced by the great number of hacks, data leaks, and privacy violations witnessed over the last couple of decades. In this post we will review new approaches that rely on knowing the cyber terrain within the decentralized and distributed networks that form part of Web 3.0.

We’ll dive deep into distributed data structures and cover the various security aspects of distributed networks that are crucial for effective cyber security standards implementation. These include (i) cryptographic key management, (ii) privacy protection mechanisms, (iii) critical infrastructure security and (iv) predictive cyber protection.

Cryptographic Key Management

The ideal architecture of modern cyber security is one built on the combined foundations of Trusted Computing and Zero Trust to provide high-quality data security. In this model, algorithmic controls are applied and verification is performed to ensure data privacy and confidentiality while keeping the system simple for users to operate. These architectural components include:

  • Key management: This is an essential component of a cryptographic access control system within a distributed or decentralized network. It manages the secret keys assigned to network entities in such a way that only authorized users can access particular resources.

    The key objective of key management in a network with multiple nodes is to restrict access to confidential data to authorized users, verified using each node's key. Cryptographic algorithms are continually being improved to perfect functions like granting and revoking access, and restructuring data in case of user or node revocation or deletion.

  • Key Tampering: While strides have been made to ensure consensus algorithms are resilient to real-world attacks, protocol architects need to constantly design new implementations of cryptographic schemes that can counter potential attacks, for example by deploying encryption to detect and deter tampering attacks.

    The Hardened Enterprise Security Platform, for example, deploys a security encryption framework designed to comprehensively secure the node endpoints of a network's core cryptographic infrastructure, key management, secure data storage, and more.
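
To make the key-management idea above concrete, here is a minimal Python sketch, not any real network's implementation, in which per-node keys are derived from a master secret with HMAC so that a verifier can check a node's access to a resource without ever distributing the master secret. All names are invented for illustration:

```python
import hashlib
import hmac
import secrets

# Invented illustration: one master secret held by the verifier only.
MASTER_KEY = secrets.token_bytes(32)

def derive_node_key(master_key, node_id):
    """Derive a node-specific key via HMAC-SHA256 of the node's identifier."""
    return hmac.new(master_key, node_id.encode(), hashlib.sha256).digest()

def access_tag(node_key, resource):
    """Tag a node presents to prove it is authorized for this resource."""
    return hmac.new(node_key, resource.encode(), hashlib.sha256).hexdigest()

def verify_access(master_key, node_id, resource, tag):
    """The verifier re-derives the node key and checks the tag in constant time."""
    expected = access_tag(derive_node_key(master_key, node_id), resource)
    return hmac.compare_digest(expected, tag)

node_key = derive_node_key(MASTER_KEY, "node-17")
tag = access_tag(node_key, "patients.db")
print(verify_access(MASTER_KEY, "node-17", "patients.db", tag))  # True
print(verify_access(MASTER_KEY, "node-99", "patients.db", tag))  # False
```

Revoking a node then amounts to the verifier refusing to derive its key, and no other node's key is affected, which is the access-granting/revocation property the bullet above describes.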

Networks that are incorporating these architectural features include Bitcoin, IPFS, and other major blockchain networks.

Privacy Protection Mechanisms

Although we have briefly covered key management and its privacy-enhancing qualities, networks like Hyperledger Fabric have developed further mechanisms, such as certificate authorities, channels, and private data collections, to improve privacy protection.

Explored further, the privacy protection frameworks of the Fabric network comprise the following aspects:

  • asymmetric cryptography to separate transaction data from on-chain records
  • a digital certificate management service
  • multiple channels that keep information separate between different groups of participants
  • private data collections, which isolate private data between different organizations within the same channel
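
The underlying pattern behind separating transaction data from on-chain records, keep the sensitive payload off-chain and put only its hash on the shared ledger, can be sketched in a few lines. This is an invented illustration, not Fabric's real API:

```python
import hashlib
import json

ledger = []           # what every peer in the channel sees: hashes only
private_store = {}    # full payloads, held only by authorized organizations

def record_private(tx_id, payload):
    """Store the payload off-chain and commit only its hash to the ledger."""
    blob = json.dumps(payload, sort_keys=True).encode()
    digest = hashlib.sha256(blob).hexdigest()
    ledger.append({"tx": tx_id, "hash": digest})
    private_store[tx_id] = payload
    return digest

def verify_private(tx_id, payload):
    """Any peer can check a revealed payload against the on-chain hash."""
    blob = json.dumps(payload, sort_keys=True).encode()
    digest = hashlib.sha256(blob).hexdigest()
    return any(e["tx"] == tx_id and e["hash"] == digest for e in ledger)

record_private("tx1", {"price": 42, "buyer": "org1"})
print(verify_private("tx1", {"price": 42, "buyer": "org1"}))  # True
print(verify_private("tx1", {"price": 99, "buyer": "org1"}))  # False
```

Peers outside the authorized set never see the payload, yet if it is later revealed they can verify it against the ledger, which is what makes the data tamper-proof and traceable without being public.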

In all distributed networks, these frameworks produce data that is tamper-proof, traceable and trustworthy. This nature of the technology is expected to be the cornerstone of Web 3.0. Yet despite these underlying capabilities, cyber security standards and controls must be followed within other technical infrastructure linked to the distributed network to protect them from outside attacks.

Critical Infrastructure Security

The adoption rate of serverless infrastructure has increased in recent years, and billions have been, and continue to be, invested in its development and support. Given that serverless computing is a relatively new technology, its unique security risks have been a challenge to understand and manage.

  • Software vulnerabilities: Since this technology started shipping in millions of devices globally, standards have been created and solutions are emerging to tackle cyber attacks that target critical consumer and business physical systems, such as smart connected devices. Unlike centralized networks, which are vulnerable to DoS and DDoS attacks, distributed networks are prone to scalability and code-vulnerability attacks that can result in extensive financial losses.
  • Hardware vulnerabilities: In spite of the increasing sophistication of software attacks, distributed networks like Bitcoin must also contend with evolving hardware vulnerabilities: unsafe hardware architectures can be used to extract private keys, and complex production processes could allow disgruntled or malicious chip designers to implant malicious logic or circuits without being noticed.

Some of the measures network maintainers can take to mitigate risks associated with database changes, code modifications, and cloud storage events include performing regular code audits to address exploitable security vulnerabilities, using CI/CD to mitigate bugs and code vulnerabilities, and leveraging tools that increase the visibility of attack indicators.

Predictive Cyber Protection

The evolving, complex nature of cyber attacks on distributed networks has called for the development of predictive cyber defenses that go beyond baking hack resistance directly into hardware. Several solutions implemented on various networks automatically generate, deploy, and manage secure configurations of components and sub-protocols for use in these networks.

Machine learning and AI tools are also used to develop integrated systems that transform data into signals relevant to predicting network attacks. Although it's yet to be seen which solutions will stand out in this industry, it can be argued that careful implementation of the other cyber security components, like key management, privacy protection mechanisms, and infrastructure security, on any distributed network will minimize system vulnerabilities.
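
As a toy stand-in for such predictive tooling, here is a minimal sketch, an invented example rather than any production system, that turns a raw traffic metric into an attack signal by flagging statistical anomalies:

```python
import statistics

def anomalies(samples, threshold=2.5):
    """Flag samples more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # perfectly flat signal: nothing to flag
    return [x for x in samples if abs(x - mean) / stdev > threshold]

# Requests per minute to a node; the spike could indicate a flood attack.
traffic = [120, 118, 125, 119, 122, 121, 117, 950, 123, 120]
print(anomalies(traffic))  # [950]
```

Real systems replace the z-score with learned models over many signals at once, but the shape is the same: compress raw telemetry into a score, then alert when the score crosses a threshold.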


The integration of Trusted Computing standards, such as decentralized DNS systems or distributed nodes, to guide users, processes, and technology has produced system-neutral solutions that address the challenge of distributed security. These are the key points to consider while developing an effective cyber security strategy for a distributed or decentralized web.


Trailer – Vue.js: The Documentary

The people at Honeypot have been working on a new Vue.js documentary, and today they launched the trailer, which features Taylor and scenes from Laracon.

What began as a side project of a Google intern now shares the JS leaderboard with #Reactjs and #Angularjs… Evan You tells the story of how he fought against the odds to bring #Vuejs to life.

The full documentary will be coming Feb 24, 2020!


Data Can Help You: How Technologies Fight Mental Health Issues

Medical technologies are not limited to remote examinations, robotic surgical controllers, and diagnostic algorithms. Today they are transforming the mental health domain, specifically how practitioners work with patients and the doctor's role.

The mental health topic is as hot as ever

The problem of mental health is not new; however, because of its specifics and the comparative novelty of its study, it lacks attention. People around the world are better aware of "traditional" diseases, which, even when easily treated, still raise more concern. Society stigmatizes mental illness.

According to a World Health Organization report, every fourth person globally is exposed to psychological and mental issues at least once in a lifetime.
About 900 million people are suffering from mental diseases today, and two-thirds make no attempt to seek qualified help. On average, there are 9 mental health professionals per 100,000 people in the world. In developing countries this indicator is less than 1, while in developed countries it sometimes surpasses 72.
The United States is the most studied market in terms of mental health, and it is also the country closest to technological breakthroughs in treatment. According to a National Institute of Mental Health report, one in five adult Americans (or 46.6 million people) suffers from some sort of psychological or mental illness.

Who got the worst of it? Millennials and zoomers. Among young people aged 18–25, one in four (25.8%) has mental problems. People over 50 suffer least of all (13.8%). Overall, out of 46.6 million patients, only 42% sought any medical help.

This is more than a medical problem. In the United States, researchers have studied the impact mental problems have on the market. The US economy loses $51 billion annually to declining workforce productivity. Major mental illnesses (those complicating the fulfillment of key life and social functions) cost the economy another $193.2 billion per year in lost earnings. And in the UK, psychological and mental issues cause people to take about 70 million additional sick days each year.
The total losses from mental problems for the global economy over the next 20 years may exceed $16 trillion. This is much more than the "price" of any non-infectious disease.

The scale of the situation resembles an epidemic. In the US, however, the response has been not treatment but a surge of interest in sedatives. The king of this market is Xanax, or, more specifically, a class of drugs called benzodiazepines based on the active substance alprazolam. They briefly suppress anxiety and relieve fatigue.

The number of Xanax prescriptions increased by 67% between 1996 and 2013, and consumption tripled by 2016. Xanax is now regularly taken by up to 5% of all adult Americans. Tolerance to the drug develops quickly, and together with opioids it becomes deadly. As a result, deaths from benzodiazepine overdoses increased eight-fold between 1999 and 2015.

Technologies offer another perspective.

Apps and far beyond

Tech here is not about digital hygiene or self-restraint. The treatment of mental problems, from anxiety to clinical depression or bipolar disorder, always rests on resources. Therapy is inaccessible to many, and made even more frightening by stigma. And even short-term sessions with a specialist are often not enough, due to the nature of the disease.

Mobile applications can turn things around. Startups, academic institutes, and research labs are developing programs that use active patient data collection to diagnose the condition in real time. These applications capture dangerous patterns and stabilize the patient through short-term interventions.

  • CrossCheck is a solution from the Dartmouth Psychiatric Research Center. This is a scientific program for the treatment of schizophrenia that was tested over the course of a year. The smartphone analyzes user behavior (launched applications, calls, SMS) and uses sensors (camera, GPS, microphone, accelerometer). The user regularly reports by answering 10 questions about their own condition. The data is sent to a server for analysis and the generation of a clear medical report. The collected data allows for a timely response to behavioral changes, and sometimes even the prevention of suicide attempts.
  • Companion is an application from Cogito, a Boston-based startup, that helps identify potential mental issues, from social isolation to anxiety or psychological trauma. The program went through a trial by fire: during testing in 2013, about 100 of its users ended up in the area of the Boston attack, and the team was able to accurately record the increase in reported symptoms. Companion does not make a diagnosis on its own; given a doctor's diagnosis, the app can warn of the threat of relapse, as reported by MIT.
  • Mindstrong is a California-based company founded by the former head of the National Institute of Mental Health. The application of the same name works on a "digital siren" principle: the app and an alternative keyboard are installed on the smartphone to monitor all of the patient's activity. The focus is on borderline personality disorder, whose patients are impulsive, anxious, and poor at self-control, and are often unaware that a stressful situation is happening.
  • 7 Cups is also a Californian project, one that creates a network for quick patient care. The project employs and trains special operators for patient counseling; today there are 340,000 operators in 198 countries. If the situation is really serious, the patient is connected to a certified therapist. 90% of users feel better after an in-app session.

Software developers are not the only ones occupying the mental health niche. The deep tech domain (technological products based on scientific research) is far wider. For example, London-based Compass Pathways intends to become the world's first medical psilocybin provider and treat depression, anxiety, and other conditions.

According to Bloomberg, the idea is simple enough. Our brain, like any complex software, accumulates bugs over time due to incorrect blocks of code (in our case, individual neural connections). Substances such as psilocybin, taken in the right doses and under supervision, allow the brain to reload. Where psychotherapy or serotonin stimulants fall short in the battle with depression, this is a chance of survival.

The project was founded by Ekaterina Malevskaya, a graduate of the St. Petersburg Medical Academy, and her husband George Goldsmith. The company has already raised $58 million in investment and received permission from US regulators to conduct full-fledged clinical trials. If the tests are successful, the products may be approved for official medical use.

To whom it may concern

Mental health is highly relevant for mobile developers. According to The New York Times, about 10,000 applications operate in the niche. But the majority of them do not rely on a serious scientific base, but rather on polished wording about awareness and psychological peace. This drew the attention of academics at the University of Liverpool.
The scientists examined the list of apps the National Health Service of England recommends for depression. Only 4 of them provide comparatively reliable evidence of effectiveness, and only 2 actually meet scientific standards. The utility of the remaining 85% of apps cannot be proved. They mainly rely on mindfulness practices: breathing exercises and meditation, which are not yet considered a clinically proven, effective treatment for depression or anxiety. They might help to cope with symptoms, but no more than that.

Real scientific developments are more complex: CrossCheck has not yet completed its testing phase, and Companion has been in testing for 5 years. In turn, Mindstrong and 7 Cups work directly with the Californian authorities in a five-year program that has received over $100 million in funding and undergoes ongoing medical and financial audits. Regulators, led by the mighty FDA (US Food and Drug Administration), do not simply hand out "clinically effective" status, especially when it comes to personal electronics.

Progress is inevitable. According to insiders in the US medical field, the FDA is preparing to announce that applications will soon become part of medical protocols for diagnosing mental illness. The FDA recently launched its Digital Health Innovation Action Plan, providing a regulatory framework for app developers, and the World Health Organization and the UK's National Institute for Health and Care Excellence have issued their own guidelines on digital health technologies. Simply put, these apps will move from experimental trials into everyday practice. Simplifying mental assistance can save lives.

There is a problem, so there is a market. According to CB Insights, an estimated $89 billion is spent on mental health treatment in the United States annually. That is, for example, almost twice the cost of pregnancy and childbirth, even considering that over 50% of sufferers do not seek help.
Investments are ballooning. During 2013–2017, about $0.6 billion was invested in the mental health industry. In the second quarter of 2019 alone, investment reached $321 million, a "boom in VC funding", as Pitchbook put it.

Startups in this field have passed through Y Combinator, Techstars, and other accelerators. Investors are drawn to a clear and healthy business model. For example, the Talkspace application allows users to chat with therapists for $32 per week, or $99 per week for video chat; SoftBank was among those who supported the project in a $28 million round. Another project with similar functionality, but a more personalized approach, provides access for $129 to $350 per month; it also raised $28 million.

Health and digital medicine are among the hottest niches, where you can hear the battle cries of tech giants like Apple and Google. Mental health, which is starting to be taken more seriously, looks like a particularly understandable and profitable area. There is proven potential for the use of smartphones, AI, and digital communication, and FDA approval will make things completely clear. Hundreds of millions of people will have a chance at therapy or even recovery. Smartphones will no longer be a source of problems, but their solution.

To sum up, in the foreseeable future, we should expect the following trends:

  • The boom of preventive measures where technologies will play a pivotal role. 
  • The diagnosis apps will work on the basis of scientific developments, and datasets will substantially boost their accuracy. 
  • A vast array of apps will be prescribed to patients ranging from VR applications for PTSD to games for treating depression. 
  • The prominent market players will increasingly partner with insurance companies. 
  • The new regulation is likely to come after the presidential elections. Warren, Murray and Smith, US senators, have been dealing with the PreCert program, aiming to reinvent medical services regulation by shifting the focus from products to developers. 
  • Much effort will be made to address data privacy issue and ensure due protection of sensitive information. 
  • Since retention rates in mental health leave much to be desired, mental health applications will rely extensively on peer support techniques, which will eventually become a success factor and a competitive advantage.

Yuri Filipchuk, partner at investment firm CYFRD, explains what this means for investors, entrepreneurs, and all of us.

A Year in Review of My Developer-Focused SAAS

This is the first annual review of Snipline, the shell command bookmarker app.

Snipline started as a tool to scratch my own itch: I wanted an app that streamlined saving, searching, and using shell commands. Not long after the initial build, I released it to the public as Software as a Service.

One of the initial builds of Snipline

Notable Product Updates

Since its release last February, there have been over 20 updates (including bugfix releases) and 8 feature releases. Notable new features include:

  • Better markdown documentation.
  • Multi-select parameters for quickly choosing between predefined choices.
  • Random password parameter.
  • Pinning snippets.
  • Dark mode.
  • Importing/exporting snippets.
  • Advanced search syntax.
  • Tags.
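
Snipline's real parameter syntax isn't shown in this post, so purely as a hypothetical sketch, the fill-in step of a snippet manager with named and random-password parameters might look something like this (the `{{name}}` and `{{password:n}}` placeholders are invented for illustration):

```python
import re
import secrets
import string

def fill_snippet(snippet, values):
    """Expand {{name}} placeholders from `values`; {{password:n}} generates
    a fresh random password of length n instead of looking up a value."""
    def replace(match):
        name = match.group(1)
        if name.startswith("password:"):
            length = int(name.split(":")[1])
            alphabet = string.ascii_letters + string.digits
            return "".join(secrets.choice(alphabet) for _ in range(length))
        return values[name]
    return re.sub(r"\{\{([^}]+)\}\}", replace, snippet)

cmd = fill_snippet(
    "mysql -u {{user}} -p{{password:16}} -h {{host}}",
    {"user": "admin", "host": "db.example.com"},
)
print(cmd)
```

A multi-select parameter would work the same way, except the UI would prompt the user to pick one of several predefined values before substitution.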

There have also been two sister apps added to the ecosystem: Snipline CLI and Snipbar for macOS. Snipline CLI is a command-line interface that syncs with your Snipline account, and Snipbar for macOS is a native Mac menu-bar app that is smaller and faster than its desktop counterpart.

Snipline CLI in all its terminal glory

So overall, it's been a busy year for Snipline. It's given me the opportunity to work with many languages, frameworks, and libraries that I probably wouldn't have touched otherwise. Snipline has been a playground for me to experiment and hone skills that I normally would not be able to in my day job.


Getting the Pricing Right

The majority of developers I've come across do not wish to pay $18/year for Snipline. This is a failing on my part to understand an audience that I consider myself part of. I've asked myself: if this were someone else's product, would I pay $18/year for it? I'd like to think I would, but perhaps my bias is leaking. More importantly, I can see why others would not.

Focusing on the Wrong Things

I have focused on product updates, new features, complementary apps, and onboarding. I believe all of these are important, but most important of all is marketing.

So far, I have mostly used Twitter and blog posts for spreading Snipline. These blog posts have mostly been tutorials that have come about from technical problems I’ve solved while working on the apps (e.g. using Ember JS with Electron). While many people have found them useful, I do not think they’ve helped attract developers to the product.

Stack Choices

It's possible that I chose the wrong stack for most developers. I created a cloud-based web app because I personally needed to sync between multiple computers, operating systems, and Apple IDs. Snipline solves this problem for me perfectly, but for many developers cloud syncing isn't necessary; they'd much rather have either a one-time cost with no syncing, or iCloud syncing.

What I’ve Done Well


Throughout last year, I've worked hard on making onboarding as smooth as possible, including writing detailed documentation, adding onboarding steps to registration, and, just recently, adding new emails with tips to keep trial accounts engaged.

One IndieHackers user kindly created an in-depth video of Snipline. The fact that someone would take the time to make a video showing off Snipline felt incredibly rewarding. Perhaps unintentionally, he also gave me some fantastic insight into how an end user learns to use Snipline. He brings up one of the onboarding emails in the video, which lists example use cases. It's great to see that these are working, but it also shows me that I should tweak the examples to concentrate on learning Snipline rather than showing off overly complicated shell commands.


One of the things many users mention is how nice the UI is. I'd love to take all the credit for this, but I had help from some very talented designers. I think many developers, myself included, are used to dealing with clunky, non-intuitive UIs in the apps we use. But since Snipline is a paid product, I don't think a bare-bones UI will cut it if I want it to stand out from free alternatives.


Snipline has been receiving regular updates throughout the past 12 months. These are usually small and incremental. I believe small changes add up over time and allow for gradual transitions. Large updates risk rocking the boat – the last thing I want to do is alienate the current user base by breaking their workflow.

What’s next for Snipline?

New Pricing Plans

Pricing is a delicate point for Snipline. It's clear that many of the people who have tried Snipline do not get the value they want out of the app compared to cheaper alternatives. At the same time, I do not want to make Snipline free, because continued development and support need to be sustainable. I also do not wish to inject ads or tracking as an alternative.

With this in mind, I really like SourceHut's "pay what you can" structure. So, as of today, I'm adding two new plans: $9 Lite and $27 Pro.

I encourage users to subscribe to the plan that works with their financial situation and the value they get out of Snipline. I still recommend the $18 plan; however, this is now only a suggestion, as each plan gets the same benefits.


Snipline Desktop 0.10.0

The next release of Snipline Desktop is right around the corner with a couple of new features and bug fixes. These include:

  • New “Snippet” page for viewing snippets and documentation.
  • Better offline support.
  • Fix for initial lag in the search bar.
  • Fix searching for tags and aliases.

Beyond 0.10.0

One of the key features I'd like to add to Snipline is client-side encryption. This is no small task, for a few reasons: I have to consider how to transition currently stored snippets, and how to make users aware of the feature, its benefits, and its risks. I also have to integrate it with Snipline CLI and Snipbar.

In addition to this I’d really like to get an iOS and iPadOS app released this year, as well as upgrade Ember JS/Electron for the Desktop app to the latest versions.

Publishing an API for developers to leverage has also been on my mind. The API has been stable for a while now, but this may change with client-side encryption so I intend to get that released first.

That’s all for now, thanks for reading!

Previously published at

Why Engineering Can’t Be The Only Team Responsible for Keeping IoT Devices Online

You're a scooter-sharing start-up that just raised $100 million from Seq-reessen-losa Ventures. Cool. It's time to drop those 50,000 scooters you just unloaded from container ships fresh from Shenzhen. You have 100 cities on your todo list. What's your plan to keep track of the scooters once you've unleashed them on millions of unsuspecting citizens?

Lucky for you, every scooter has a SIM card and an LTE data plan so your engineering team can keep track of them no matter where people scoot, right?

Not quite.

For a successful launch, every micro-mobility company needs to keep their devices available 24/7 for maximum vehicle utilization. A big part of this is keeping them all online and connected, allowing users to ride and your team to know where to jump in to fix any problems that may occur.

IoT connectivity is the essential piece of the micro-mobility IoT stack. Without it, data would simply live on the devices and never yield insights. 

Yet many organizations we work with don’t have a dedicated connectivity department. Because connectivity cuts across many areas, some companies struggle to decide who’s responsible. 

As IoT evolves and expands, perhaps more organizations will assign a dedicated connectivity team—but for many companies, that’s not immediately possible due to resource constraints. In some larger enterprises, organizations may enlist an existing relationship with a carrier to manage their connectivity externally. But for organizations newer to IoT, handling connectivity can be a voyage into uncharted waters. 

Making Assumptions about IoT Connectivity

In many of today’s IoT companies, much of the internal experience comes from the software side. These software companies are concerned about uptime and service levels, but actual connectivity to the internet is often a baked-in assumption. There are servers, VPN links, and integration protocols on their internal block diagrams, but chances are there are no blocks for the underlying connectivity. For IoT, a device’s connection to the internet—and in some cases, to other devices on the network—is a critical link that must be intentionally designed along with the software infrastructure.

How Can Your Organization Foster a Better Understanding of IoT Connectivity?

First off, communicate the message that every department plays a role in connectivity, especially with cellular. It’s not just the engineering team making sure devices stay connected. Everyone—from the finance department to the product team—needs to understand the basics of how it works and the value it provides to the business.

Here’s how connectivity responsibilities reach into various departments—and how they evolve during the product’s lifecycle:


Engineers are the most obvious stewards of connectivity. They’re often the first team members responsible for an organization’s IoT connectivity decisions, especially when a team is still developing their IoT product. Engineers provide technical recommendations and research for connectivity options, whether that’s Bluetooth, Wi-Fi, cellular, or LoRa. If cellular is the right choice, there are additional decisions to make around the specific technology—which module category and connection protocol are best suited for the use case?

How much data will be sent, and how frequently? Do the devices require persistent network connections or just occasional check-ins? How long does the battery need to last? How will antenna selection affect signal performance?
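Questions like these lend themselves to back-of-the-envelope scripting. The helper below is a hypothetical sketch (the function name and the per-report overhead figure are illustrative assumptions, since framing overhead depends on the chosen connection protocol) that estimates monthly cellular data per device:

```python
def monthly_data_mb(payload_bytes: int, reports_per_hour: float,
                    protocol_overhead_bytes: int = 60) -> float:
    """Estimate per-device monthly data usage in megabytes.

    protocol_overhead_bytes roughly approximates per-report framing
    (TCP/IP, TLS, etc.); the real figure varies by protocol.
    """
    per_report = payload_bytes + protocol_overhead_bytes
    reports_per_month = reports_per_hour * 24 * 30  # assume a 30-day month
    return per_report * reports_per_month / 1_000_000

# A scooter sending a 200-byte GPS ping every 30 seconds:
usage = monthly_data_mb(payload_bytes=200, reports_per_hour=120)
print(round(usage, 1))  # roughly 22.5 MB per device per month
```

Even a crude estimate like this helps engineers pick between, say, a persistent connection on a higher-category module and occasional check-ins on a low-power one.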


The product team helps an organization interpret the device data that connectivity provides from the field. With that goal in mind, they need to understand the deployment’s data connectivity needs. If the project requires a high-bandwidth connection with a Category 6 4G LTE modem and 500 GB of data per month, it falls to the product team to dictate those requirements and determine the capabilities of the organization’s connectivity partner. Other responsibilities could include validating carrier partner API capabilities, creating device launch plans, and assessing radio access and coverage requirements.

The product team also determines how the organization will handle the customer lifecycle and billing with regard to connectivity. Will the cost of connectivity be integrated into an existing monthly service fee model? Will it be an up-front fee based on the product’s life cycle? Or will the organization pass connectivity costs directly to the customer each billing cycle?


For the finance team, IoT connectivity must be understood as a value driver, not just a cost line item. In the pre-launch stage, the finance department receives the connectivity requirements from the product team and determines how the expected costs align with the organization’s business model. How will those costs affect hardware payback or product lifecycle ROI? The finance team should consider additional value from connectivity platform providers through redundant coverage, dashboard collaboration and analytics, and API availability. 

Working with connectivity providers with these features will reduce up-front integration costs as well as future operational overhead. Once budget expectations are established and the project is launched, the finance team should continue to monitor data usage and costs, to adjust ongoing budget expectations and gain a clear picture of connectivity investment.
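The payback math described above can be sketched in a few lines. This is a hypothetical illustration (function name and all dollar figures invented), showing how the monthly connectivity cost folds into a per-device break-even calculation:

```python
def payback_months(hardware_cost: float, monthly_revenue: float,
                   monthly_connectivity: float,
                   monthly_other_opex: float = 0.0) -> float:
    """Months until a device's monthly margin covers its hardware cost."""
    monthly_margin = monthly_revenue - monthly_connectivity - monthly_other_opex
    if monthly_margin <= 0:
        raise ValueError("device never pays back at these rates")
    return hardware_cost / monthly_margin

# A $550 scooter earning $90/month, with a $2 data plan and $48 in other upkeep:
print(round(payback_months(550, 90, 2, 48), 1))  # 13.8 months
```

Seen this way, connectivity is a small slice of monthly opex, which is why treating it purely as a cost line item understates its role in keeping revenue-earning devices online at all.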


For the operations team, working SIM cards into the supply chain may be their biggest connectivity-related responsibility. To achieve that, they coordinate with their contract manufacturer and IoT SIM vendor to keep up with purchasing and manufacturing timelines—making sure parts arrive on time, in the right format, and in the right quantity. They also facilitate the initial device provisioning process into internal systems as units get tested, ship, and come online.

As part of this, operations will likely have the best perspective on how to link a provider’s SIM card identifiers, such as ICCID, to device IMEI or the organization’s internal device identifier. After launch, the operations team continues to monitor the inventory of available SIMs and maintains an ongoing conversation with the connectivity vendor on potential new markets, forecasts, and deployments.


Once a device is launched in the field, the customer success team takes the lead on understanding connectivity problems and how to deal with them. They should familiarize themselves with the tools, dashboards, and applications available through the connectivity vendor and leverage that data along with their own internal metrics.

The customer success team also needs to incorporate connectivity in their technical support. When a problem arises, they should be checking for connectivity issues in addition to device firmware or internal server problems.
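That check order (connectivity first, then firmware, then backend) can be sketched as a tiny triage function. The status fields and messages here are hypothetical, purely to illustrate the ordering:

```python
def triage(device: dict) -> str:
    """Order support checks: connectivity first, then firmware, then backend."""
    if device.get("network_status") != "online":
        return "connectivity: device offline -- check SIM status and coverage"
    if device.get("firmware_ok") is False:
        return "firmware: device online but firmware reports a fault"
    if device.get("server_ok") is False:
        return "backend: device healthy -- investigate internal servers"
    return "no issue detected"

print(triage({"network_status": "offline"}))
# connectivity: device offline -- check SIM status and coverage
```

Putting connectivity first matters because an offline device can masquerade as a firmware or server bug; checking it early saves the support team from chasing the wrong layer.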

So, Who’s Really Responsible For Connectivity?

Essentially, the answer is “everyone.”

The entire organization needs to understand the value of connectivity and have access to the connectivity platform. Teams should communicate concerns, expectations, ideas, and roadblocks to each other. Collaboration is essential to making this work.

Enabling Connectivity Collaboration for the New IoT Team

As engineers and product managers prepare for a connected product launch, make sure all of your teams have access to the data they need.

For example:

  • If an engineer wants to check a single device’s connection status and recent history, they should be able to open their IoT dashboard and find that data.
  • If the finance department needs to review pricing and spend, they can log in through the portal and determine an ROI.
  • The operations team can access tools to manage their SIM lifecycle and ensure devices are working before they leave Shenzhen.
  • And your customer success department can view the live connection status of every device and tie support tickets back to scooters that might need your help.

This is why we’ve built an IoT platform to work for every team and everyone – all based on our observations from working with thousands of clients implementing their IoT projects.

Ultimately, our goal is to simplify connectivity, make it accessible throughout your organization, and make sure Seq-reessen-losa Ventures’ $100 million dollar investment is well spent.

BrandPost: A SASE Crash Course

2020! What could better motivate you to push ahead with your resolutions and your organization’s digital transformation than a new year AND a new decade? As you put together your digital strategy, check out a new transformation-empowering (and transformational) technology category Gartner has coined the Secure Access Service Edge, or SASE (pronounced “Sassy”). SASE converges wide area networking and identity-based security into a cloud service targeted directly at your branch offices, mobile users, cloud services, and even IoT devices, wherever they happen to be. The result: consistently high WAN performance, security, productivity, agility, and flexibility across the global, mobile, cloud-enabled enterprise.

To jumpstart your research into one of the few networking categories Gartner has labelled “transformational,” we’ve put together a very workable SASE crash course and reading list. Each lesson helps you dig a little deeper into SASE, so you can develop a good grasp of its components and transformational potential.

Lesson 1: SASE as Defined by Gartner

So, what is SASE exactly and why should you care? SASE was coined by Gartner analysts Neil MacDonald and Joe Skorupa in a July 29, 2019 Networking Hype Cycle Market Trends Report, How to Win as WAN Edge and Security Converge into the Secure Access Service Edge, and an August 30, 2019 Gartner report, The Future of Network Security is in the Cloud. If you don’t have access to these reports, Cato quotes the highlights of the former word for word in this short blog: The Secure Access Service Edge (SASE), as Described in Gartner’s Hype Cycle for Enterprise Networking, 2019. It’s a great place to get started on exactly what Gartner has to say about SASE and its drivers, likely development, and place in the digitally transforming enterprise. There are also some valuable links to more information on SASE and exactly how the Cato cloud fits into the SASE trend.

Lesson 2: What SASE Is and What It Isn’t

After Gartner piques your interest, get some valuable insight from Cato in this blog: The Secure Access Service Edge (SASE): Here’s Where Your Digital Business Network Starts. Here you can learn why convergence of wide area networking and security is absolutely vital for the agile, digitally transforming enterprise and why legacy data center-centric solutions can’t deliver any more in a world of user mobility and the cloud. This blog breaks down the four essential attributes of SASE—identity driven, cloud native, support for all edges, and globally distributed—in detail. It also explains why SASE is not anything like telco-managed services and summarizes how Cato delivers SASE effectively.

Lesson 3: How Cato Delivers SASE

Sometimes visual/audio-based learning can bring things into better focus than straight text, and few people are better at explaining WAN and security concepts than Yishay Yovel, Cato Networks’ Chief Marketing Officer. In this short, 17-minute video presentation, Intro to SASE by Yishay, Yishay digs into Gartner’s take on SASE, why WAN and security need to converge, and why SASE is one of only three (out of 29) Networking Hype Cycle categories that Gartner has labeled “transformational.” Yishay gets into a lot of nitty-gritty SASE details and offers valuable perspective on how Cato Networks delivers a complete cloud-native SASE software stack that supports all edges and is identity-driven, scalable, flexible, and easy to deploy and manage. Yishay also explains clearly why some of the other WAN and security solutions out there don’t fulfill some essential requirements of SASE, such as processing traffic close to the source. For visual learners, there are also some great architectural diagrams.

Lesson 4: Gartner Webinar Breaks Down SASE and its Implications

You’ve heard it from Yishay, now hear it from Gartner’s VP Distinguished Analyst Neil MacDonald and Yishay in this 37-minute Gartner Webinar: Is SASE the Future of SD-WAN and Network Security? MacDonald explains SASE elements and drivers in depth, why SASE belongs in the cloud, how enterprises will adopt SASE, and how organizations should evaluate SASE offerings. There’s some good detail here on how SASE works in different contexts and scenarios, such as a mobile employee connecting to Salesforce securely from the airport, a contractor accessing a Web application from an unmanaged device, and even wind turbines collecting and aggregating data and sending it to the cloud for processing. Neil digs into core SASE requirements and recommends additional services and some other useful options. Yishay then takes over with why Cato is the world’s first true SASE platform.

Lesson 5: The White Paper

But wait, there’s more. Here’s a clear and concise white paper from Cato, The Network for the Digital Business Starts with the Secure Access Service Edge. This is a good piece to give out to the other digital transformation stakeholders in your business if you want them to get up to speed on SASE fast. It’s a quick read that explains why the digital, mobile, cloud-enabled business needs a new converged network/security model. It also covers the four elements of SASE, core SASE capabilities, SASE benefits, and clear examples of what SASE isn’t and why. It describes the features that make Cato one of the most comprehensive SASE offerings on the market. It’s a clear, concise presentation broken into short paragraphs and bullet points to provide a fast introduction to SASE and the Cato Cloud.

Lesson 6: Icing on the Cake: The Short and Sweet Video

SASE (Secure Access Service Edge) is a short YouTube video to go along with the white paper, combining perspective and information from Gartner and Cato on why you need SASE simplicity for your digital transforming business.

We hope you have a happy, healthy, transforming New Year. To accelerate your organization’s digital transformation over the next decade, get up to speed on SASE with these useful blogs, videos, and white papers and find out how SASE can help you make that transformation happen quickly and more easily.

MATE Desktop 1.24 Release Arrives with a Slew of Improvements

A new version of the MATE desktop has been released and in this post — yes, the one you’re reading right now — I recap some of the changes on offer in the MATE 1.24 release.

Not familiar with this particular desktop environment? The MATE desktop was conceived as a direct continuation of the “old” GNOME 2 codebase but, rather like the Cinnamon desktop, has long since matured into its own distinct thing.

The MATE desktop sits at the heart of many Linux distros, including Ubuntu MATE, and is particularly popular with those who prefer a traditional ‘2 panel’ desktop experience with simple app menus, feature-filled apps, and fewer flashy effects.

New MATE 1.24 Features

Among the MATE 1.24 features, changes, and improvements the MATE desktop team specifically highlights are a new Date & Time app, a new MATE Disk Image Mounter utility, and a ‘Do Not Disturb’ setting in the notification system.

Though conceived as a continuation of the GNOME 2 codebase, the MATE desktop has long since matured into its own distinct thing

The versatile MATE Panel improves its support for Wayland and HiDPI screens (including the Wanda the fish applet), while the window list applet is now able to show window thumbnails on hover (should you want it to).

Elsewhere, the Engrampa archive manager now supports a couple of additional package formats, while the Eye of MATE image viewer adds support for Wayland, .webp files, and embedded color profiles.

Marco is the MATE desktop window manager and, in this update, gifts users a variety of older window decorations, adds invisible resize borders, and renders window control buttons pixel-perfect on HiDPI screens.

Elsewhere, the Alt + Tab and Workspace switcher popups have been reworked into an OSD style. They also respond to keyboard arrow keys — a small but welcome adjustment.

Other miscellaneous changes include:

  • System Monitor panel applet supports NVMe drives
  • Various HiDPI improvements
  • Mouse app supports acceleration profiles
  • Menu editor now supports Undo and Redo

Pretty impressive stuff — but some of these changes might seem familiar. And that’s because a lot of the big shiny things listed above have been back-ported to MATE 1.22, which is available in Ubuntu MATE 19.10.

Upgrade to MATE Desktop 1.24

Wondering how you can upgrade to MATE 1.24 on your system? Well, if you’re particularly bored, you can download the source code and compile it by hand.

Otherwise the answer will depend on which Linux distribution you’re using.

If you’re on Ubuntu MATE (any stable version) then you won’t be able to upgrade to MATE 1.24 but, as mentioned above, you don’t need to: many of these changes have already been backported.

Otherwise, Ubuntu MATE 20.04 LTS arrives this April and will, barring any unforeseen circumstances like an alien invasion, feature MATE 1.24.

Arch, Manjaro, and other “rolling release” Linux distros will likely provide MATE 1.24 packages to their users as soon as humanly possible, so if you’re running one of those then do keep an eye out!

InfoWorld Technology of the Year Awards promotional information

Congratulations! To help you promote your win, InfoWorld has supplied the following PR information and usage guidelines.

Note that companies named as an InfoWorld Technology of the Year Award winner may be referenced in press outreach to publicize the awards and InfoWorld’s content about the awards. InfoWorld will not disclose proprietary corporate information, but it may highlight information included by your organization in the original InfoWorld Technology of the Year Awards content.

If you would like a physical award, or print or electronic reprints, you can place an order through the YGS Group via phone at (800) 290-5460 x129 or via email at [email protected]. Note that YGS solely determines the cost of the awards’ production.

InfoWorld logo usage and guidelines

As an InfoWorld Technology of the Year winner, you have the opportunity to purchase a license for the rights to use the Technology of the Year logo. Please contact the YGS Group via phone at (800) 290-5460 x129 or via email at [email protected].

The InfoWorld Technology of the Year brand is a valuable asset that International Data Group needs to protect. We ask that you help us by properly using the logo in accordance with our guidelines listed below. Accordingly, we ask that your business partners, customers, and other third parties adhere to the guidelines listed below.

Parties given permission to use the InfoWorld Technology of the Year logotype must adhere to the following rules:

  • The logotype may not be altered in any manner, including size, proportions, colors, elements, type, or in any other respect. You may not animate, morph, or otherwise distort its perspective or dimensional appearance.
  • The logotype may not be combined with any other graphic or textural elements and may not be used as a design element of any other logo or trademark.
  • The logotype must be separated from your company name and product names by the space of one logo width or one inch, whichever is greater.

If you are unsure if your usage is within these guidelines, please email Stacey Raap in IDG Communications Marketing.

PR opportunities

Please read the following guidelines carefully before preparing a press release that references InfoWorld and the InfoWorld Technology of the Year rankings.

Obtaining a quote from InfoWorld

If your company wants to include a quote attributed to an InfoWorld spokesperson, please use the following quote to reinforce InfoWorld Technology of the Year key messages. Modified versions of this quote are subject to approval from InfoWorld. All quotes should be attributed to Doug Dineley, Executive Editor, InfoWorld.

“If digital transformation means anything, it means taking advantage of the latest advances in software development, cloud computing, data analytics, and AI to improve your business,” said Doug Dineley, executive editor of InfoWorld. “Our 2020 Technology of the Year Award winners are the platforms and tools that the most innovative companies are using to tap the power of data, streamline business processes, and respond more quickly to customers and new business opportunities.” 

References to InfoWorld and the InfoWorld Technology of the Year Awards

All communications involving InfoWorld and the InfoWorld Technology of the Year Awards must be consistent with the style and content listed below:

  • The full feature name is the InfoWorld Technology of the Year Awards.
  • InfoWorld is always one word, and the “w” is capitalized.
  • On first reference, list the publication as “IDG’s InfoWorld.” InfoWorld as a stand-alone name may be used after the first reference.
  • If a subsidiary of an organization is included in the rankings, press releases or marketing material should specify that the unit—not the parent organization—received the award. If the parent unit is cited, the name of the subsidiary unit should be more prominent in placement, size, and usage than that of the parent unit.

The following approved InfoWorld corporate boilerplate and short InfoWorld Technology of the Year Awards description may be used, where appropriate, in press releases referencing inclusion in the InfoWorld Technology of the Year feature.

Description of InfoWorld Technology of the Year Awards

Selected by InfoWorld editors and reviewers, the annual awards identify the best and most innovative products on the IT landscape. Winners are drawn from products tested during the past year, with the final selections made by InfoWorld’s Reviews staff. 

Corporate boilerplate

About InfoWorld

InfoWorld is the leading resource for content and tools on modernizing enterprise IT. Our editors and writers provide first-hand experience from testing, deploying, and managing implementations of emerging enterprise technologies. InfoWorld’s website and custom solutions provide a deep dive into specific technologies to help IT decision-makers excel in their roles and provide opportunities for IT vendors to reach this audience. InfoWorld is published by IDG Communications, a subsidiary of International Data Group (IDG), the world’s leading media, events, and research company. Company information is available at

About IDG Communications

IDG Communications, an International Data Group (IDG) company, brings together the leading editorial brands (CIO, Computerworld, CSO, InfoWorld, JavaWorld, and Network World) to serve the information needs of our technology- and security-focused audiences. As the premier high-tech B2B media company, we leverage the strengths of our premium owned and operated brands, while simultaneously harnessing their collective reach and audience affinity. We provide market leadership and converged marketing solutions for our customers to engage IT and security decision-makers across our portfolio of award-winning websites, events, magazines, products, and services. Company information is available at

Swift language targets machine learning

Moving toward Swift 6, the core development team behind Apple’s Swift programming language has set priorities including refining the language for use in machine learning.

Ambitions in the machine learning space are part of plans to invest in “user-empowering directions” for the language. Apple is not the only company with machine learning ambitions for Swift; Google has integrated Swift with the TensorFlow machine learning library in a project called Swift for TensorFlow. And the Swift community has created Swift Numerics, a library that can be used for machine learning.

In addition to machine learning, directions eyed for Swift include language features for building APIs, such as variadic generics, and DSL capabilities such as function builders. Solutions for major language features such as memory ownership and concurrency also are part of the plan. Other specific goals for Swift, cited in a January 2020 bulletin, include:

  • Creating a “fantastic development experience,” with developers able to be highly productive and joyful when programming in the language. These investments include faster builds, better diagnostics, responsive code completion, and reliable debugging. Most current engineering work in the project covers these areas.
  • Growing the Swift software ecosystem, including expanding the number of supported platforms and improving how software written in Swift is deployed. Also planned is support for cross-platform tools such as Language Server Protocol, the Swift Package Manager, code formatting, and refactoring. Cultivation of a rich open source library ecosystem also is eyed.

Introduced in June 2014, Swift has been rising steadily in the Tiobe index of programming language popularity, jumping from 20th place a year ago to 10th place in the February 2020 index. Its predecessor, Objective-C, has done the reverse, dropping from 10th a year ago to 20th this month. The release currently in development is Swift 5.2. A succession of Swift 5.x releases is expected before Swift 6.
