The End of Agile? Not a Chance.

There’s been a fair amount of opining lately about the end of Agile, the 19-year-old movement that began in software development and has made its way through the workforce as an alternative to more traditional ways of working. People seem to be worried that a strategy once considered lean, mean, and productive has now become cultish, bloated, and ineffectual. But Agile continues to work, and it continues to work well — when implemented in a disciplined way. Read More

#devops

Operationalizing AI

When AI practitioners talk about taking their machine learning models and deploying them into real-world environments, they don’t call it deployment. Instead, the term that’s used is “operationalizing.” This might be confusing for traditional IT operations managers and application developers. Why don’t we “deploy” or “put into production” AI models? What does AI operationalization mean, and how is it different from typical application development and IT systems deployment? Read More

#devops

There’s No Such Thing As The Machine Learning Platform

In the past few years, you might have noticed the increasing pace at which vendors are rolling out “platforms” that serve the AI ecosystem, namely addressing data science and machine learning (ML) needs. The “Data Science Platform” and “Machine Learning Platform” are at the front lines of the battle for the mind share and wallets of data scientists, ML project managers, and others who manage AI projects and initiatives. If you’re a major technology vendor and you don’t have some sort of big play in the AI space, then you risk rapidly becoming irrelevant. But what exactly are these platforms, and why is there such an intense market share grab going on? Read More

#devops, #frameworks, #strategy

How To Leverage Deep Learning For Automation Of Mobile Applications

Mobile applications have already made their mark on the digital front, with a large number of applications on the Google Play Store and the Apple App Store. There are applications for almost everything today. But as the mobile app market expands, it faces new challenges and obstacles to overcome.

Deep learning is a subfield of artificial intelligence. It uses algorithms to parse data and provide deep insights into applications and their issues. Often, time constraints and deadline pressure get the better of developers, leaving neither them nor higher management enough room to test the app properly before the grand launch. Here, deep learning can help automate mobile application testing and deployment.

The interactions between the user and the system are facilitated through the GUI (Graphical User Interface). An interaction may include, for example, clicking, scrolling, or inputting text into a GUI element such as a button, an image, or a text block. An input generator can produce interactions for several tests. Read More
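As a sketch of the idea, a random input generator for GUI testing might look like the following. The event types, element names, and seeding scheme are illustrative assumptions for this example, not details from the article:

```python
# Illustrative sketch of a random GUI input generator for mobile-app testing.
# Event types and element names below are assumptions, not from the article.
import random

EVENT_TYPES = ["click", "scroll", "input_text"]
GUI_ELEMENTS = ["button_login", "image_banner", "text_username"]

def generate_interactions(n, seed=None):
    """Produce n (event, element, payload) tuples for a test run."""
    rng = random.Random(seed)
    interactions = []
    for _ in range(n):
        event = rng.choice(EVENT_TYPES)
        element = rng.choice(GUI_ELEMENTS)
        # Only text-input events carry a payload in this sketch.
        payload = "hello" if event == "input_text" else None
        interactions.append((event, element, payload))
    return interactions

# Seeding makes a failing interaction sequence reproducible.
trace = generate_interactions(5, seed=42)
```

Fixing the seed is what makes this useful in practice: when a generated sequence crashes the app, the same sequence can be replayed exactly.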

#devops

MLflow: an Open Source Machine Learning Platform

Everyone who has tried to do machine learning development knows that it is complex. Beyond the usual concerns in software development, machine learning (ML) development comes with multiple new challenges. MLflow is an open-interface, open source machine learning platform, released by Databricks in 2018, that can be used to create an internal ML platform for tracking, packaging, and deploying ML models.

Read More

#devops

Version Control for ML Models

Machine learning operations (let’s call it MLOps, following the current xxOps buzzword pattern) are quite different from traditional software development operations (DevOps). One of the reasons is that ML experiments involve large datasets and model artifacts in addition to code (small plain-text files).

This post presents a solution for version-controlling machine learning models with Git and DVC (Data Version Control). Read More
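A minimal version of that workflow, using standard Git and DVC commands (the file paths and remote location are illustrative assumptions), might look like:

```shell
# Track a large model artifact with DVC instead of Git.
git init && dvc init
dvc add models/model.pkl          # writes models/model.pkl.dvc, gitignores the binary
git add models/model.pkl.dvc models/.gitignore
git commit -m "Track model v1 with DVC"

# Configure artifact storage (a local directory here; could be S3, GCS, etc.).
dvc remote add -d localstore /tmp/dvc-store
dvc push                          # uploads the model to the remote

# Later: restore the model version that matches any Git revision.
git checkout <rev>
dvc checkout
```

The `.dvc` file is a small plain-text pointer that Git versions normally, while the heavy binary lives in the DVC remote, keyed by content hash.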

#devops

State-of-the-Art Results for All Machine Learning Problems

A GitHub repository providing state-of-the-art (SoTA) results for machine learning problems. Links are categorized as Supervised Learning, Semi-Supervised Learning, Unsupervised Learning, Transfer Learning, and Reinforcement Learning, with details pointing to the research paper, datasets, metric, source code, and year of publication. Read More

#devops

Why AI and Machine Learning will Redefine Software Testing

With the advent of DevOps and Continuous Delivery, businesses are now looking for real-time risk assessment throughout the various stages of the software delivery cycle.

Although Artificial Intelligence (AI) is not really new as a concept, applying AI techniques to software testing has only started to become a reality in the past couple of years. Down the line, AI is bound to become part of our day-to-day quality engineering process; before that happens, let us take a look at how AI can help us achieve our quality objectives. Read More

#devops

Deciding PaaS or SaaS for Building IoT Solutions in Microsoft Azure

Building out an IoT (Internet of Things) solution can be a difficult problem to solve. It sounds easy at first: you just connect a bunch of devices, sensors, and such to the cloud. You write software to run on the IoT hardware and in the cloud, then connect the two to gather data/telemetry, communicate, and interoperate. Sounds easy, right? Well, it’s actually not as simple as it sounds. There are many things that can be difficult to implement correctly. The biggest problem area is security, as it is in most other types of systems. Then there are device management, cloud vs. edge analytics, and many other aspects of a full IoT solution.

Traditionally you would need to build all of this yourself; however, Microsoft offers a few options for building out IoT solutions. The Azure IoT Suite offers PaaS (Platform as a Service) capabilities that are flexible for any scenario, while the newer Microsoft IoT Central offers more managed SaaS (Software as a Service) capabilities to further ease development, deployment, and management. Read More

#devops, #iot

Model Cards for Model Reporting

Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related artificial intelligence technology, increasing transparency into how well artificial intelligence technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation. Read More
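The structure the paper proposes can be sketched as a simple record. The sections below mirror those proposed in the paper; the class name, field names, and all field values are my own illustrative choices:

```python
# Illustrative sketch of a model card as a structured record. The sections
# mirror the paper's proposal; names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModelCard:
    model_details: str
    intended_use: str
    factors: list          # e.g. demographic or phenotypic groups evaluated
    metrics: dict          # metric name -> value, per evaluation condition
    evaluation_data: str
    training_data: str
    ethical_considerations: str = ""
    caveats: str = ""

card = ModelCard(
    model_details="Smiling-face detector, CNN, v1.0 (hypothetical)",
    intended_use="Research on face attribute classification; not for surveillance",
    factors=["age", "gender", "Fitzpatrick skin type"],
    metrics={"accuracy@overall": 0.90},  # placeholder value
    evaluation_data="Held-out split of a public faces dataset",
    training_data="Public faces dataset (details omitted here)",
)
```

Publishing such a card alongside a model release makes the disaggregated evaluation conditions explicit rather than leaving them buried in a paper appendix.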

#devops, #explainability, #governance