Building out an IoT (Internet of Things) solution can be a difficult problem to solve. It sounds easy at first: you connect a bunch of devices and sensors to the cloud, write software to run on the IoT hardware and in the cloud, then connect the two to gather data / telemetry, communicate, and interoperate. Sounds easy, right? Well, it's actually not as simple as it sounds. There are many things that can be difficult to implement correctly. The biggest problem area is security, as it is in most other types of systems as well. Beyond that there's device management, cloud vs edge analytics, and many other aspects of a full IoT solution.
Traditionally you would need to build all of this out yourself; however, with offerings from Microsoft there are a few options available for building out IoT solutions. The Azure IoT Suite offers PaaS (Platform as a Service) capabilities that are flexible for any scenario, while the newer Microsoft IoT Central offers more managed SaaS (Software as a Service) capabilities to further ease development, deployment, and management. Read More
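To make the "gather data / telemetry" part concrete, here is a minimal sketch of packaging device readings into a telemetry message. The envelope shape (deviceId / messageId / timestamp / body) is a common convention rather than a requirement of any particular cloud service, and the device name is made up; a real device would hand this payload to an SDK or MQTT client.

```python
import json
import time
import uuid

def build_telemetry(device_id, readings):
    """Package sensor readings into a JSON telemetry envelope.

    The field names here are a common convention (illustrative only),
    not the schema of any specific IoT platform.
    """
    return json.dumps({
        "deviceId": device_id,
        "messageId": str(uuid.uuid4()),   # lets the cloud side de-duplicate
        "timestamp": time.time(),
        "body": readings,
    })

# Hypothetical device sending a temperature/humidity reading.
message = build_telemetry("thermostat-01", {"temperatureC": 21.5, "humidityPct": 43})
payload = json.loads(message)
```

In a real solution the same envelope would be sent over MQTT, AMQP, or HTTPS to the cloud gateway, which routes it to storage or stream analytics.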
Model Cards for Model Reporting
Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related artificial intelligence technology, increasing transparency into how well artificial intelligence technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation. Read More
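The disaggregated-evaluation idea at the heart of a model card can be sketched in a few lines of code. The fields below loosely follow the sections the abstract describes (intended use, evaluation details, per-group benchmarks); the model name, group names, and metric are all illustrative, not taken from the paper's actual cards.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal sketch of a model card. Field names loosely mirror the
    sections described in the abstract; contents are illustrative."""
    model_name: str
    intended_use: str
    evaluation_data: str
    # Disaggregated metrics: one score per demographic/phenotypic group.
    metrics_by_group: dict = field(default_factory=dict)
    caveats: list = field(default_factory=list)

    def worst_group_gap(self, metric="accuracy"):
        """Spread between the best- and worst-performing groups: a quick
        signal that the model may not suit all subpopulations equally."""
        scores = [m[metric] for m in self.metrics_by_group.values()]
        return max(scores) - min(scores)

# Hypothetical card for a smiling-face detector.
card = ModelCard(
    model_name="smile-detector-v1",
    intended_use="Detect smiling faces in consumer photos",
    evaluation_data="Held-out test split with audited labels",
    metrics_by_group={
        "age<30": {"accuracy": 0.94},
        "age>=60": {"accuracy": 0.88},
    },
    caveats=["Not evaluated on low-light images"],
)
```

Reporting the per-group numbers, rather than a single aggregate accuracy, is exactly what makes the gap between groups visible to a prospective user.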
Datasheets for Datasets
The machine learning community currently has no standardized process for documenting datasets. To address this gap, we propose datasheets for datasets. In the electronics industry, every component, no matter how simple or complex, is accompanied with a datasheet that describes its operating characteristics, test results, recommended uses, and other information. By analogy, we propose that every dataset be accompanied with a datasheet that documents its motivation, composition, collection process, recommended uses, and so on. Datasheets for datasets will facilitate better communication between dataset creators and dataset consumers, and encourage the machine learning community to prioritize transparency and accountability. Read More
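One lightweight way to operationalize the analogy is a completeness check: before a dataset is released, verify that its datasheet answers each proposed group of questions. The section names below mirror the topics listed in the abstract (motivation, composition, collection process, recommended uses); the exact field names and the example dataset are illustrative.

```python
# Section names loosely follow the topics in the abstract; they are
# illustrative, not the paper's official question list.
REQUIRED_SECTIONS = (
    "motivation",
    "composition",
    "collection_process",
    "recommended_uses",
)

def missing_sections(datasheet: dict) -> list:
    """Return the required sections a datasheet draft has not yet filled in."""
    return [s for s in REQUIRED_SECTIONS if not datasheet.get(s)]

# A hypothetical half-finished datasheet.
draft = {
    "motivation": "Benchmark toxicity classifiers on forum comments",
    "composition": "120k English comments, crowd-labeled",
}
gaps = missing_sections(draft)
```

A check like this could run in a dataset repository's CI, blocking publication until every section has an answer.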
The AI Edge Engineer: Extending the power of CI/CD to Edge devices using containers
The Artificial Intelligence – Cloud and Edge implementations course explores the idea of extending CI/CD to Edge devices using containers. This post presents these ideas under the framework of the 'AI Edge Engineer'. Note that the views presented are personal; comments from those exploring similar ideas, especially in academia and research, are welcome.
The post discusses models of development for AI Edge Engineering based on deploying containers to Edge devices, an approach that unifies the Cloud and the Edge. Read More
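The core of container-based CD to the edge is a reconciliation loop: each device (or a controller acting for it) compares the image digest it is running against the latest digest pushed to the registry, and pulls when they differ. The sketch below stubs out both sides with plain dictionaries; the module name, device ids, and digests are all hypothetical, and a real pipeline would query a container registry and let the device's container runtime apply the update.

```python
def reconcile(device_state: dict, registry: dict) -> list:
    """Return (device, digest) pairs that need a container update.

    device_state maps device id -> digest of the image currently running;
    registry maps module name -> latest pushed digest. Both are stubbed
    here; in practice they come from the fleet manager and the registry.
    """
    latest = registry["edge-inference"]  # hypothetical module name
    return [
        (device, latest)
        for device, running in device_state.items()
        if running != latest
    ]

# cam-01 is behind the registry; cam-02 is already up to date.
updates = reconcile(
    {"cam-01": "sha256:aaa", "cam-02": "sha256:bbb"},
    {"edge-inference": "sha256:bbb"},
)
```

Because the unit of deployment is the same container image in the cloud and on the device, the CI stage that builds and tests the image does not need to know where it will eventually run.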
3 Examples of AI at Work in DataOps
Artificial intelligence (AI) is making all the difference between innovators and laggards in the global marketplace. Yet, implementing a state-of-the-art DataOps operation involves a long-term commitment to putting in place the right people, processes and tools that will deliver results.
In this post, we look at three organizations that are doing cutting-edge work in the field of DataOps. We look at the specific strategies they use and the results they’ve seen as they navigate the uncharted waters of DataOps. Read More
Microservices Observability (Part 1)
This is a demonstration of how to observe, trace, and monitor Java microservices in an OpenShift environment.
In microservices architecture and modern systems design, there are five observability patterns that help us monitor distributed systems effectively. They are the foundation for anyone who wants to build reliable cloud applications. This tutorial dives into domain-oriented observability, monitoring, instrumentation, and tracing from a business-centered perspective, with a practical view using open-source projects sustained by the Cloud Native Computing Foundation (CNCF). Read More
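Domain-oriented observability is often illustrated with the domain probe pattern: the business code reports events in its own vocabulary, and a probe object translates them into whatever metrics or tracing backend is configured. The sketch below (in Python rather than the tutorial's Java, with made-up event and field names) records events in a list so it stays self-contained; a real probe would emit metrics or spans instead.

```python
class OrderPlacementProbe:
    """Domain probe: collects business-level events so instrumentation
    concerns stay out of the business logic. Here events go to a list;
    a production probe would forward them to a metrics/tracing backend."""
    def __init__(self):
        self.events = []

    def order_placed(self, order_id):
        self.events.append(("order_placed", order_id))

    def order_rejected(self, order_id, reason):
        self.events.append(("order_rejected", order_id, reason))

def place_order(order, probe):
    """Business rule reads cleanly; no logging/metric plumbing inline."""
    if order["total"] <= 0:
        probe.order_rejected(order["id"], "non-positive total")
        return False
    probe.order_placed(order["id"])
    return True

probe = OrderPlacementProbe()
ok = place_order({"id": "o-1", "total": 25.0}, probe)
```

Swapping the probe's backend (Prometheus counters, OpenTelemetry spans) then requires no change to `place_order` itself, which is the decoupling the pattern is after.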
TWiML Presents AI Platforms, Vol 2
Over the next few weeks on the podcast, we’re bringing you volume 2 of our AI Platforms series. You’ll recall that last fall we brought you AI Platforms Volume 1, featuring conversations with platform builders from Facebook, Airbnb, LinkedIn, Open AI, Shell and Comcast. This series turned out to be our most popular series of shows ever, and over 1,000 of you downloaded our first eBook on ML platforms, “Kubernetes for Machine Learning, Deep Learning & AI.” Well now it’s back, and we’re sharing more experiences of teams working to scale and industrialize data science and machine learning at their companies. Read More
TWiML Presents AI Platforms, Vol 1
As many of you know, part of my work involves understanding the way large companies are adopting machine learning, deep learning and AI. While it's still fairly early in the game, we're at a really interesting time for many companies. With the first wave of ML projects at early-adopter enterprises starting to mature, many of them are asking themselves how they can scale up their ML efforts to support more projects and teams.
Part of the answer to successfully scaling ML is supporting data scientists and machine learning engineers with modern processes, tooling and platforms. Now, if you’ve been following me or the podcast for a while, you know that this is one of the topics I really like to geek out on.
Well, I’m excited to announce that we’ll be exploring this topic in depth here on the podcast over the next several weeks. Read More
Container technologies promise more agility for big data apps
First developed to make applications easier to deploy, manage and scale, container technologies nonetheless have seen limited use in big data systems due to earlier struggles managing application state and data. But all that is beginning to change, promising more agility and flexibility for these systems.
Containers can be viewed as part of a continuum of infrastructure simplification situated between traditional monolithic infrastructure and serverless functions, said John Gray, CTO at Infiniti Consulting, an InterVision company. Compared to monolithic infrastructure deployments, serverless infrastructure could provide more agility and reduce costs in the short run, while greatly easing management tasks in the long run. Read More
MIT's new interactive machine learning prediction tool could give everyone AI superpowers
Soon, you might not need anything more specialized than a readily accessible touchscreen device and whatever data sets you already have in order to build powerful prediction tools. A new experiment from MIT and Brown University researchers has added a capability to their 'Northstar' interactive data system that can "instantly generate machine-learning models" for use with existing data sets in order to generate useful predictions.
One example the researchers provide is that doctors could use the system to make predictions about the likelihood that their patients will contract specific diseases based on their medical history. Or, they suggest, a business owner could use their historical sales data to develop more accurate forecasts, quickly and without a ton of manual analytics work. Read More
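The "instantly generate machine-learning models" idea can be caricatured in a few lines: fit several candidate predictors to the data and keep whichever scores best on a validation split. This is a deliberately tiny sketch of automated model selection in general, not of Northstar's internals, and the two candidate models (a constant mean and a one-feature least-squares line) are just the simplest possible examples.

```python
def fit_mean(xs, ys):
    """Baseline: always predict the training mean."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_line(xs, ys):
    """Ordinary least squares for a single feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (
        sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        / sum((x - mx) ** 2 for x in xs)
    )
    return lambda x: my + slope * (x - mx)

def auto_select(train, valid):
    """Fit every candidate on the training split and keep the one with
    the lowest squared error on the validation split."""
    xs, ys = zip(*train)
    vx, vy = zip(*valid)
    candidates = [fit_mean(xs, ys), fit_line(xs, ys)]

    def validation_error(model):
        return sum((model(x) - y) ** 2 for x, y in zip(vx, vy))

    return min(candidates, key=validation_error)

# Noise-free linear data (y = 2x): the line model should win.
model = auto_select(
    train=[(1, 2.0), (2, 4.0), (3, 6.0)],
    valid=[(4, 8.0), (5, 10.0)],
)
```

Real AutoML systems search far larger model and hyperparameter spaces, but the select-by-validation-score loop is the same shape.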