The rapid progress in synthetic image generation and manipulation has reached a point where it raises significant concerns about its implications for society. At best, this leads to a loss of trust in digital content, but it could cause further harm by spreading false information or fake news. This paper examines the realism of state-of-the-art image manipulations, and how difficult it is to detect them, either automatically or by humans.
To standardize the evaluation of detection methods, we propose an automated benchmark for facial manipulation detection. In particular, the benchmark is based on DeepFakes [1], Face2Face [59], FaceSwap [2] and NeuralTextures [57] as prominent representatives of facial manipulations at random compression levels and sizes. The benchmark is publicly available and contains a hidden test set as well as a database of over 1.8 million manipulated images. This dataset is over an order of magnitude larger than comparable, publicly available forgery datasets. Based on this data, we performed a thorough analysis of data-driven forgery detectors. We show that the use of additional domain-specific knowledge improves forgery detection to unprecedented accuracy, even in the presence of strong compression, and clearly outperforms human observers.
Daily Archives: August 11, 2020
High-Resolution Neural Face Swapping for Visual Effects
In this paper, we propose an algorithm for fully automatic neural face swapping in images and videos. To the best of our knowledge, this is the first method capable of rendering photo-realistic and temporally coherent results at megapixel resolution. To this end, we introduce a progressively trained multi-way comb network and a light- and contrast-preserving blending method. We also show that while progressive training enables generation of high-resolution images, extending the architecture and training data beyond two people allows us to achieve higher fidelity in generated expressions. When compositing the generated expression onto the target face, we show how to adapt the blending strategy to preserve contrast and low-frequency lighting. Finally, we incorporate a refinement strategy into the face landmark stabilization algorithm to achieve temporal stability, which is crucial for working with high-resolution videos. We conduct an extensive ablation study to show the influence of our design choices on the quality of the swap and compare our work with popular state-of-the-art methods.
3 Lessons from Chinese Firms on Effective Digital Collaboration
Collaboration between organizations has never been more important. In the face of the current pandemic, a collaborative approach can help address market failures resulting from information asymmetry, misaligned incentives, or a lack of market intermediaries. Yet many companies restrict their partnerships to formal mechanisms such as joint ventures, limiting the extent of their collaboration.
Useful inspiration can come from China, where Covid-19 is but one of many crises that businesses have faced, and where a variety of pressures and opportunities have shaped a set of distinctive partnering practices. Through its rapid transformation from an economy lacking in basic commercial infrastructure to an e-commerce pioneer, China has emerged as a laboratory for developing new collaboration strategies.
How to get your data scientists and data engineers rowing in the same direction
In the slow process of developing machine learning models, data scientists and data engineers need to work together, yet they often work at cross purposes. As ludicrous as it sounds, I’ve seen models take months to get to production because the data scientists were waiting for data engineers to build production systems to suit the model, while the data engineers were waiting for the data scientists to build a model that worked with the production systems.
A previous article by VentureBeat reported that 87% of machine learning projects don’t make it into production, and a combination of data concerns and lack of collaboration were primary factors. On the collaboration side, the tension between data engineers and data scientists — and how they work together — can lead to unnecessary frustration and delays. While team alignment and empathy building can alleviate these tensions, adopting some emerging MLOps technologies can help mitigate issues at the root cause.
How Do Data Science, Machine Learning, and Artificial Intelligence Overlap?
In conjunction with data science and digital transformation, you have probably heard the terms artificial intelligence, machine learning, and deep learning used. You might wonder what the relationship between these topics is. How do companies in industries ranging from biopharma to chemicals to food & beverage incorporate AI, machine learning, and data science to enhance their processes? AI and machine learning enable applications such as virtual digital assistants, facial recognition, and self-driving cars, as well as improvements in healthcare diagnostics and process manufacturing. Are you interested in building a career in these fields? There are many AI certification courses, data science certification courses, and ML certifications available online. Check them out!
A critical analysis of metrics used for measuring progress in artificial intelligence
Comparing model performances on benchmark datasets is an integral part of measuring and driving progress in artificial intelligence. A model’s performance on a benchmark dataset is commonly assessed based on a single or a small set of performance metrics. While this enables quick comparisons, it may also entail the risk of inadequately reflecting model performance if the metric does not sufficiently cover all performance characteristics. Currently, it is unknown to what extent this might impact current benchmarking efforts. To address this question, we analysed the current landscape of performance metrics based on data covering 3867 machine learning model performance results from the web-based open platform ‘Papers with Code’. Our results suggest that the large majority of metrics currently used to evaluate classification AI benchmark tasks have properties that may result in an inadequate reflection of a classifier’s performance, especially when used with imbalanced datasets. While alternative metrics that address problematic properties have been proposed, they are currently rarely applied as performance metrics in benchmarking tasks. Finally, we noticed that the reporting of metrics was partly inconsistent and partly unspecific, which may lead to ambiguities when comparing model performances.
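The abstract’s point about imbalanced datasets can be illustrated with a minimal sketch (the toy labels and the majority-class baseline below are illustrative assumptions, not data from the paper): on a 95/5 class split, plain accuracy rewards a classifier that never predicts the minority class, while balanced accuracy, the mean of per-class recalls, exposes the failure.

```python
# Illustrative sketch: plain accuracy vs. balanced accuracy on imbalanced data.
# Toy dataset: 95 negatives (0) and 5 positives (1); the "classifier" is a
# majority-class baseline that always predicts 0. All names are hypothetical.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100  # always predicts the majority class

# Plain accuracy looks excellent...
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# ...but balanced accuracy (mean of per-class recalls) reveals the failure.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
balanced_accuracy = 0.5 * (tp / (tp + fn) + tn / (tn + fp))

print(accuracy)           # 0.95 — looks strong
print(balanced_accuracy)  # 0.5  — no better than chance
```

This is exactly the failure mode the paper attributes to commonly reported metrics: a single headline number can hide that one class is never detected.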