From robots delivering coffee to office chairs rearranging themselves after a meeting, a smart city project in China aims to put artificial intelligence in charge, its creators told a conference this week – raising some eyebrows.
Danish architecture firm BIG and Chinese tech company Terminus discussed plans to build an AI-run campus-style development in the southwestern Chinese city of Chongqing during an online panel at Web Summit, a global tech conference. Read More
Bjarke Ingels Is Designing an Artificial Intelligence-Based ‘Smart City’ in China
A space where the barriers between humans and artificial intelligence will be removed.
After smartphones and smart homes, the next logical step is smart cities. Who better to take us there than superstar architect Bjarke Ingels, a noted off-the-wall thinker, and Terminus Group, a burgeoning Chinese tech firm that specializes in smart services?
Bjarke Ingels Group (BIG) recently unveiled plans for a striking, high-tech hub that will become the future headquarters for the firm. Read More
Driving The Next Generation of AI
This article is a response to an article arguing that an AI winter may be inevitable. However, I believe there are fundamental differences between what happened in the 1970s (the first AI winter) and the late 1980s (the second AI winter, with the fall of expert systems) and the situation today: the arrival and growth of the internet, smartphones and social media mean that the volume and velocity of data being generated are constantly increasing, requiring Machine Learning and Deep Learning to make sense of the Big Data we generate.
The rapid growth in Big Data has driven much of the growth in AI, alongside the reduced cost of data storage (cloud servers) and Graphics Processing Units (GPUs) making Deep Learning more scalable. Data will continue to drive much of the future growth of AI; however, the nature of the data and the location of its interaction with AI will change. This article sets out how the future of AI will increasingly sit alongside data generated at the edge of the network (on device), closer to the user. This has the advantage of lower latency, and 5G networks will enable a dramatic increase in device connectivity, with much greater capacity to connect IoT devices than 4G networks. Read More
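To make the edge scenario concrete, here is a minimal, hypothetical sketch of running inference on-device with TensorFlow Lite, one common runtime for edge deployment. The article does not prescribe any particular framework; the model file and input data below are placeholders.

```python
import numpy as np
import tensorflow as tf

# Load a (hypothetical) quantized model converted for on-device use.
interpreter = tf.lite.Interpreter(model_path="edge_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run inference locally on freshly captured data, with no round trip
# to a cloud server, which is where the latency advantage comes from.
sensor_frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], sensor_frame)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)
```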
Your Brain Doesn’t Work the Way You Think It Does
A conversation with neuroscientist Lisa Feldman Barrett on the counterintuitive ways your mind processes reality—and why understanding that might help you feel a little less anxious.
At the very beginning of her new book Seven and a Half Lessons About the Brain, psychology professor Lisa Feldman Barrett writes that each chapter will present “a few compelling scientific nuggets about your brain and considers what they might reveal about human nature.” Though it’s an accurate description of what follows, it dramatically undersells the degree to which each lesson will enlighten and unsettle you. It’s like lifting up the hood of a car to see an engine, except that the car is you and you find an engine that doesn’t work at all like you thought it did.
For instance, consider the fourth lesson, Your Brain Predicts (Almost) Everything You Do. “Neuroscientists like to say that your day-to-day experience is a carefully controlled hallucination, constrained by the world and your body but ultimately constructed by your brain,” writes Dr. Barrett, who is a University Distinguished Professor at Northeastern and who has research appointments at Harvard Medical School and Massachusetts General Hospital. “It’s an everyday kind of hallucination that creates all of your experiences and guides all your actions. It’s the normal way that your brain gives meaning to the sensory inputs from your body and from the world (called “sense data”), and you’re almost always unaware that it’s happening.” Read More
Contrastive Learning of Medical Visual Representations from Paired Images and Text
Learning visual representations of medical images is core to medical image understanding, but its progress has been held back by the small size of hand-labeled datasets. Existing work commonly relies on transferring weights from ImageNet pretraining, which is suboptimal due to drastically different image characteristics, or on rule-based label extraction from the textual report data paired with medical images, which is inaccurate and hard to generalize. We propose an alternative unsupervised strategy to learn medical visual representations directly from the naturally occurring pairing of images and textual data. Our method of pretraining medical image encoders with the paired text data via a bidirectional contrastive objective between the two modalities is domain-agnostic and requires no additional expert input. We test our method by transferring our pretrained weights to 4 medical image classification tasks and 2 zero-shot retrieval tasks, and show that our method leads to image representations that considerably outperform strong baselines in most settings. Notably, in all 4 classification tasks, our method requires only 10% as much labeled training data as an ImageNet-initialized counterpart to achieve better or comparable performance, demonstrating superior data efficiency. Read More
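As a rough illustration of the kind of bidirectional contrastive objective the abstract describes, here is a minimal PyTorch-style sketch of a symmetric InfoNCE loss between paired image and text embeddings. It is not the paper's exact formulation, and all names and parameters are illustrative.

```python
import torch
import torch.nn.functional as F

def bidirectional_contrastive_loss(image_emb, text_emb, temperature=0.1):
    """Symmetric contrastive loss over a batch of paired image/text embeddings.

    image_emb, text_emb: (batch, dim) projections from the two encoders.
    Each image is pulled toward its paired report and pushed away from the
    other reports in the batch, and vice versa.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) matrix of scaled cosine similarities.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    loss_img_to_txt = F.cross_entropy(logits, targets)      # image -> paired text
    loss_txt_to_img = F.cross_entropy(logits.t(), targets)  # text -> paired image
    return 0.5 * (loss_img_to_txt + loss_txt_to_img)
```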
Ignite your AI curiosity with Dr. Andrew Ng
The Delicate Art of Making Robots That Don’t Creep People Out
The robot Digit stands approximately five feet, four inches high, with a metallic torso the teal color of a hospital worker’s scrubs. It can walk up and down staircases and around corners on two legs, and lift, carry, and stack boxes up to 40 pounds with arms whose hinges evoke the broad shoulders of a swimmer.
Agility Robotics, Digit’s manufacturer, shipped roughly 30 of these robots earlier this year to industrial and academic clients.
… It did not anticipate a swift early consensus that the robot gave people the creeps. Read More
Google Reveals Major Hidden Weakness In Machine Learning
In recent years, machines have become almost as good as humans, and sometimes better, in a wide range of abilities — for example, object recognition, natural language processing and diagnoses based on medical images.
And yet machines trained in this way still make mistakes that humans would never fall for.
… So computer scientists are desperate to understand the limitations of machine learning in more detail. Now a team made up largely of Google computer engineers has identified an entirely new weakness at the heart of the machine learning process that leads to these problems. Read More
Read the paper
IBM Cloud gets quantum-resistant cryptography
IBM Corp. is looking to make enterprise workloads deployed on its public cloud resistant to tomorrow’s encryption-breaking quantum computers.
As a first step to that end, the company today introduced “quantum-safe cryptography” capabilities for three services in IBM Cloud: Red Hat OpenShift on IBM Cloud, Cloud Kubernetes Service and Key Protect. Customers using the services can now secure data with an encryption algorithm that will have a better chance of withstanding future quantum attacks, according to the company. Read More
AlphaFold: a solution to a 50-year-old grand challenge in biology
Proteins are essential to life, supporting practically all its functions. They are large, complex molecules, made up of chains of amino acids, and what a protein does largely depends on its unique 3D structure. Figuring out what shapes proteins fold into is known as the “protein folding problem”, and has stood as a grand challenge in biology for the past 50 years. In a major scientific advance, the latest version of our AI system AlphaFold has been recognised as a solution to this grand challenge by the organisers of the biennial Critical Assessment of protein Structure Prediction (CASP). This breakthrough demonstrates the impact AI can have on scientific discovery and its potential to dramatically accelerate progress in some of the most fundamental fields that explain and shape our world. Read More