Hire from these 9 AI-vy League companies, not Ivy League schools

A Harvard diploma, a PhD, or stint at Google are no longer the best signifiers of the top minds in artificial intelligence. Instead, hirers should look for engineers and researchers with applied AI experience at a group of nine startups that our data shows have the highest concentration of AI talent.

The past seven years have seen a de-credentialization of the AI hiring space as demand for engineering talent in the field explodes. The percentage of AI hires that come from top schools or have PhDs has dropped significantly from a peak in 2015, according to data from SignalFire’s own Beacon AI data platform.   – Read More

#strategy

Large Model Security and Ethics Research Report 2024

The rapid rise of large model applications has introduced some unique new risks to AI security that are different from previously discovered risks, such as prompt risks, including prompt injection and adversarial attacks. In response to the new security risks unique to such large models, we have built a prompt security evaluation platform, which is specially used to simulate the behavior of attackers to understand the security and performance of large models in risk scenarios associated with prompts. The purpose of this evaluation platform is to automatically discover potential innate security risks in advance before the large model goes online. It also assists the business in reducing risk during the process of launching the large model to ensure that its response content complies with various laws and regulations such as the “Interim Measures for the Management of Generative AI Services”. Therefore, prompt security assessment requires automated attack sample generation and automated risk analysis capabilities.  – Read More

#china-ai