Some exceptions may be allowed for law enforcement purposes. “Real-time” remote biometric identification systems will be allowed in a limited number of serious cases, while “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed to prosecute serious crimes and only after court approval.

AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:

1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.

2) AI systems falling into specific areas that will have to be registered in an EU database:

- Management and operation of critical infrastructure
- Employment, worker management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Migration, asylum and border control management
- Assistance in legal interpretation and application of the law

All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.

Generative AI, like ChatGPT, would have to comply with transparency requirements:

- Disclosing that the content was generated by AI
- Designing the model to prevent it from generating illegal content
- Publishing summaries of copyrighted data used for training

High-impact general-purpose AI models that might pose systemic risk, such as the more advanced AI model GPT-4, would have to undergo thorough evaluations, and any serious incidents would have to be reported to the European Commission.

Limited risk AI systems should comply with minimal transparency requirements that would allow users to make informed decisions.