Data Ethics Lead, The Central Digital and Data Office, The Cabinet Office
TALK
The growing capability and availability of generative language models have enabled a wide range of new downstream tasks. Prior research has identified, quantified and mitigated biases present in language models, but it is rarely tailored to the downstream tasks where the wider impact on individuals and society is felt.
This presentation shows the real-world impact of bias in language models in a situation that many of us engage with throughout our lives: job ads. It quantifies the levels of bias present in language models and appraises the most effective methods for mitigating bias in the workplace, so that the outputs of LLMs can be more inclusive.
Friday, 25 October, 15:50
Duration: 30 minutes
Track: Responsible AI
Ben has over 10 years of experience in the field of AI, digital and data ethics, having worked across the public and private sectors and academia. His research interests include bias, privacy and data protection (his thesis is titled 'Legitimately Interesting', and he promises the content is at least faintly engaging), and the impacts of AI on creative industries.
Ben is the founder and organiser of the meetup community 'AI Ethics London', which will resume live events soon. Currently, Ben serves as the Data Ethics Lead at the Central Digital and Data Office.
LinkedIn: https://www.linkedin.com/in/bengilburt/
AI for the rest of us is brought to you by Kortensia. Kortensia Ltd is a company registered in England and Wales (Company No.15773675)