I’ve never met a developer who enjoys writing documentation. At best, they understand its value and write it begrudgingly, but they never enjoy the process.
Some people go by the philosophy that good code should document itself, but if that were true, why is the one person who knows the entire codebase so valuable to a team? A lot of knowledge, reasoning, and context simply cannot be deduced from raw code. Well-maintained documentation only adds value and context to a codebase.
… AI Doc Writer for JavaScript, TypeScript, Python, and PHP is a VS Code extension that generates documentation for you using AI. It works like this: you select the code you want to document, then press the ‘Generate docs’ button or hit the keyboard shortcut Cmd/Ctrl + . Read More
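To make that concrete, here is a hypothetical before/after: the function body is the kind of code you would select, and the docstring is the kind of documentation an AI doc writer might produce. This is an illustrative sketch, not actual output from the extension.

```python
# Hypothetical example of AI-generated documentation: the function is the
# code you would select; the docstring is what a doc writer might emit.

def merge_intervals(intervals):
    """Merge overlapping intervals into a minimal set of disjoint intervals.

    Args:
        intervals: A list of (start, end) tuples, not necessarily sorted.

    Returns:
        A list of (start, end) tuples sorted by start, where no two
        intervals overlap.
    """
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```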
Competitive programming with AlphaCode
Creating solutions to unforeseen problems is second nature in human intelligence – a result of critical thinking informed by experience. The machine learning community has made tremendous progress in generating and understanding textual data, but advances in problem solving remain limited to relatively simple maths and programming problems, or else retrieving and copying existing solutions. As part of DeepMind’s mission to solve intelligence, we created a system called AlphaCode that writes computer programs at a competitive level. AlphaCode achieved an estimated rank within the top 54% of participants in programming competitions by solving new problems that require a combination of critical thinking, logic, algorithms, coding, and natural language understanding.
In our preprint, we detail AlphaCode, which uses transformer-based language models to generate code at an unprecedented scale, and then smartly filters to a small set of promising programs.
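In spirit, that pipeline looks something like the sketch below: sample a large number of candidate programs, run each against the problem's example tests, and keep the few that survive. Every name here is a hypothetical stand-in; the real system samples from a transformer at a vastly larger scale.

```python
import random

# A toy sketch of the sample-then-filter idea (hypothetical stand-ins for
# the real transformer-based pipeline).

def passes_example_tests(source, examples):
    """Run one candidate against the problem's example input/output pairs."""
    try:
        namespace = {}
        exec(source, namespace)                 # load the candidate program
        solve = namespace["solve"]
        return all(solve(inp) == out for inp, out in examples)
    except Exception:
        return False                            # crashes count as failures

def sample_and_filter(model, problem, examples, n_samples=100, keep=10):
    """Sample many candidates, keep a small set of promising programs."""
    candidates = [model(problem) for _ in range(n_samples)]
    survivors = [c for c in candidates if passes_example_tests(c, examples)]
    return survivors[:keep]

def toy_model(_problem):
    """Stand-in 'model': randomly emits a correct or an incorrect program."""
    return random.choice([
        "def solve(x):\n    return x * 2",      # correct for this toy task
        "def solve(x):\n    return x + 2",      # plausible but wrong
    ])

print(sample_and_filter(toy_model, "double the input", [(1, 2), (3, 6)]))
```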
We validated our performance using competitions hosted on Codeforces, a popular platform which hosts regular competitions that attract tens of thousands of participants from around the world who come to test their coding skills. We selected for evaluation 10 recent contests, each newer than our training data. AlphaCode placed at about the level of the median competitor, marking the first time an AI code generation system has reached a competitive level of performance in programming competitions.
To help others build on our results, we’re releasing our dataset of competitive programming problems and solutions on GitHub, including extensive tests to ensure the programs that pass these tests are correct — a critical feature current datasets lack. We hope this benchmark will lead to further innovations in problem solving and code generation. Read More
Intel’s ControlFlag Debugging Tool Uses AI To Clean Up Code And It’s Now Open Source
In 2020, a study estimated that the IT industry spent $2 trillion on software development costs associated with debugging code, and that 50 percent of IT budgets were allocated to debugging alone. Intel hopes to change those numbers by making its ControlFlag tool open source.
ControlFlag is an AI-powered tool created by Intel that detects bugs in computer code using advanced self-supervised machine learning (ML). The software, developed last year, was able to locate hundreds of confirmed defects in proprietary, production-quality software systems after just a few analyses of source code repositories. Its machine learning techniques enable it to find coding anomalies, reduce time spent debugging, and improve the quality and security of systems autonomously. Read More
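The core idea is self-supervised pattern mining: learn which idioms dominate across a large corpus, then flag the statistically rare variants as likely typos. The sketch below, with hypothetical patterns and thresholds, illustrates the principle rather than ControlFlag's actual implementation.

```python
from collections import Counter

# A toy illustration of anomaly detection by pattern mining (hypothetical
# patterns and threshold; not ControlFlag's actual algorithm).

def mine_condition_patterns(corpus_lines):
    """Count how often competing idioms appear across a code corpus."""
    counts = Counter()
    for line in corpus_lines:
        if "== None" in line:
            counts["== None"] += 1       # the unusual spelling
        elif "is None" in line:
            counts["is None"] += 1       # the idiomatic spelling
    return counts

def flag_anomalies(counts, rarity_threshold=0.05):
    """Flag patterns that are rare relative to their alternatives."""
    total = sum(counts.values())
    if total == 0:
        return []
    return [p for p, c in counts.items() if c / total < rarity_threshold]

corpus = ["if x is None:"] * 99 + ["if x == None:"]
print(flag_anomalies(mine_condition_patterns(corpus)))  # ['== None']
```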
AI can write code like humans, mistakes and all
Some software developers are now letting artificial intelligence help write their code. They are finding that AI is as flawed as humans.
Last June, GitHub, the Microsoft subsidiary that provides tools for hosting and collaborating on code, released a beta version of a program that uses AI to assist programmers. Start typing a command, a database query, or a request to an API, and the program, called Copilot, will guess your intent and write the rest.
Alex Naka, a data scientist at a biotechnology company, signed up for the Copilot test. He said the program is very helpful and has changed the way he works. “It lets me spend less time jumping to the browser to find API documentation or examples on Stack Overflow,” he said. “It feels a bit like my job has changed from being a generator of code to being a discriminator of code.” Read More
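That workflow is roughly: write a comment or a signature, and the assistant proposes the body. Both the prompt and the suggested completion below are hypothetical, not actual Copilot output.

```python
import json
import urllib.request

# Prompt the developer types:
# fetch JSON from a URL and return the parsed result, or None on failure

# A plausible assistant-suggested completion (hypothetical):
def fetch_json(url, timeout=10):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except (OSError, ValueError):
        return None
```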
Machine learning’s crumbling foundations
Technological debt is insidious, a kind of socio-infrastructural subprime crisis that’s unfolding around us in slow motion. Our digital infrastructure is built atop layers and layers and layers of code that’s insecure due to a combination of bad practices and bad frameworks.
Even people who write secure code import insecure libraries, or plug their code into insecure authorization systems or databases. Like asbestos in the walls, this cruft has been fragmenting, drifting into our air a crumb at a time.
We ignored these breaches, treating them as small and containable, and now the walls are rupturing and choking clouds of toxic waste are everywhere. Read More
OpenAI Codex Live Demo
Program Synthesis with Large Language Models
This paper explores the limits of the current generation of large language models for program synthesis in general purpose programming languages. We evaluate a collection of such models (with between 244M and 137B parameters) on two new benchmarks, MBPP and MathQA-Python, in both the few-shot and fine-tuning regimes. Our benchmarks are designed to measure the ability of these models to synthesize short Python programs from natural language descriptions. The Mostly Basic Programming Problems (MBPP) dataset contains 974 programming tasks, designed to be solvable by entry-level programmers. The MathQA-Python dataset, a Python version of the MathQA benchmark, contains 23,914 problems that evaluate the ability of the models to synthesize code from more complex text. On both datasets, we find that synthesis performance scales log-linearly with model size. Our largest models, even without fine-tuning on a code dataset, can synthesize solutions to 59.6% of the problems from MBPP using few-shot learning with a well-designed prompt. Fine-tuning on a held-out portion of the dataset improves performance by about 10 percentage points across most model sizes. On the MathQA-Python dataset, the largest fine-tuned model achieves 83.8% accuracy. Going further, we study the model’s ability to engage in dialog about code, incorporating human feedback to improve its solutions. We find that natural language feedback from a human halves the error rate compared to the model’s initial prediction. Additionally, we conduct an error analysis to shed light on where these models fall short and what types of programs are most difficult to generate. Finally, we explore the semantic grounding of these models by fine-tuning them to predict the results of program execution. We find that even our best models are generally unable to predict the output of a program given a specific input. Read More
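For a sense of what these benchmarks look like, here is an MBPP-style problem, written for illustration rather than taken from the dataset: a one-sentence description paired with assert-based tests that a synthesized solution must pass.

```python
# Task (illustrative, MBPP-style): "Write a function to count the vowels
# in a given string."

def count_vowels(s):
    return sum(1 for ch in s.lower() if ch in "aeiou")

# Held-out tests a candidate solution must pass:
assert count_vowels("programming") == 3
assert count_vowels("AEiou") == 5
assert count_vowels("xyz") == 0
```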
#devops, #nlp
How to Hack APIs in 2021
Baaackkk iiin myyy dayyyyy APIs were not nearly as common as they are now. This is due to the explosion in popularity of Single Page Applications (SPAs). Ten years ago, web applications tended to follow a pattern where most of the application was generated on the server side before being presented to the user. Any data that was needed would be gathered directly from a database by the same server that generated the UI.
Many modern web applications follow a different model: the SPA. In this model there is typically an API backend, a JavaScript UI, and a database. The API simply serves as an interface between the web app and the database. All requests to the API are made directly from the web browser.
This is often a better solution because it is easier to scale and allows more specialised developers to work on the project, i.e. frontend developers can work on the frontend while backend developers work on the API. These apps also tend to feel snappier because page loads are not required for every request.
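As a rough illustration of that API tier, the sketch below serves JSON from an in-memory stand-in for the database, using only Python's standard library; the browser-side JavaScript would fetch /api/items directly. The endpoint and data are hypothetical.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal SPA-style API tier: the browser fetches JSON directly from the
# API, which fronts the data store. The in-memory "database" is a stand-in.
FAKE_DB = [{"id": 1, "name": "widget"}, {"id": 2, "name": "gadget"}]

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/items":
            body = json.dumps(FAKE_DB).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ApiHandler).serve_forever()
```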
… All this to say – there are APIs everywhere now, so we should know how to hack and secure them. Read More
OpenAI can translate English into code with its new machine learning software Codex
AI research company OpenAI is releasing a new machine learning tool that translates the English language into code. The software is called Codex and is designed to speed up the work of professional programmers, as well as help amateurs get started coding.
In demos of Codex, OpenAI shows how the software can be used to build simple websites and rudimentary games using natural language, as well as translate between different programming languages and tackle data science queries. Users type English commands into the software, like “create a webpage with a menu on the side and title at the top,” and Codex translates this into code. The software is far from infallible and takes some patience to operate, but could prove invaluable in making coding faster and more accessible. Read More
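Programmatically, driving Codex during the beta looked roughly like the sketch below, using the OpenAI Python client as it existed at the time (the engine name and client interface have since changed; the prompt is hypothetical).

```python
import openai  # the 2021-era OpenAI Python client

openai.api_key = "sk-..."  # your API key

# Ask Codex to complete code from a natural language description.
response = openai.Completion.create(
    engine="davinci-codex",
    prompt='"""Return the n-th Fibonacci number."""\ndef fib(n):',
    max_tokens=128,
    temperature=0,
)
print(response["choices"][0]["text"])  # the generated code
```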
Eden AI launches platform to unify ML APIs
Large cloud vendors like Amazon, Google, Microsoft, and IBM offer APIs enterprises can use to take advantage of powerful AI models. But comparing these models — both in terms of performance and cost — can be challenging without thorough planning. Moreover, the siloed nature of the APIs makes it difficult to unify services from different vendors into a single app or workflow without custom engineering work, which can be costly.
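The sketch below shows the kind of per-vendor glue code such a platform aims to replace: without a unifying layer, each provider needs its own adapter behind a common interface. Provider names and responses here are hypothetical.

```python
# Hypothetical per-vendor adapters behind one interface; real provider SDK
# calls would go where the stub comments are.

def sentiment_via_provider_a(text):
    # ... call provider A's SDK and reshape its response ...
    return {"label": "positive", "score": 0.91}

def sentiment_via_provider_b(text):
    # ... call provider B's REST API and reshape its response ...
    return {"label": "positive", "score": 0.87}

PROVIDERS = {"a": sentiment_via_provider_a, "b": sentiment_via_provider_b}

def analyze_sentiment(text, provider="a"):
    """One interface in front of many vendor APIs."""
    return PROVIDERS[provider](text)

print(analyze_sentiment("great product", provider="b"))
```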
These challenges inspired Samy Melaine and Taha Zemmouri to found Eden AI (previously AI-Compare) in 2020. The platform draws on AI APIs from a range of sources to allow companies to mix and match models to suit their use case. Eden AI recently launched what it calls an AI management platform, which the company claims simplifies the use — and integration — of various models for end users. Read More