BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//LHoFT - ECPv6.15.17//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://lhoft.com
X-WR-CALDESC:Events for LHoFT
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20180101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20190424T173000
DTEND;TZID=UTC:20190424T173000
DTSTAMP:20260403T194359Z
CREATED:20190424T085202Z
LAST-MODIFIED:20190424T085202Z
UID:34530-1556127000-1556127000@lhoft.com
SUMMARY:Is AI the solution to biased AI?
DESCRIPTION:As enterprises build and deploy artificial intelligence systems\, it’s important to understand the ethical considerations of our work. Ethics are not a separate business objective bolted on after an AI system has been deployed; they are part of business performance. Only by embedding ethical principles into AI applications and processes can we build systems that people can trust.\nAs AI advances and humans and AI systems increasingly work together\, it is essential that we can trust the output of these systems to inform our decisions. Alongside policy considerations and business efforts\, science has a central role to play: developing and applying tools to wire AI systems for trust. To encourage the adoption of AI\, we must ensure it does not take on and amplify our biases\, and knowing how an AI system arrives at an outcome is key to trust\, particularly for enterprise AI.\nTo help the community engender trust in AI\, IBM Research has open-sourced AI Fairness 360 (http://aif360.mybluemix.net)\, a comprehensive toolkit of metrics and algorithms to check for and mitigate unwanted bias in AI. IBM also launched its Trust & Transparency service as part of AI OpenScale (https://www.ibm.com/cloud/ai-openscale). This service explains how AI decisions are made\, and automatically detects and mitigates bias to produce fair\, trusted outcomes.\nIn this meetup we will explore the ‘dangers of AI’: bias\, lack of explainability\, and robustness issues. We will also explore the AI Fairness 360 toolkit and IBM’s Trust & Transparency service. Hands-on examples will be available.\nSpeaker: Stefan Van Den Borre\, Technical Professional – Watson Data Platform\, IBM.\nDoors open at 17:30 hrs.\nSession intended to start at 18:00 hrs.
URL:https://lhoft.com/event/is-ai-the-solution-to-biased-ai-2/
CATEGORIES:Webinar
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20190424T173000
DTEND;TZID=UTC:20190424T173000
DTSTAMP:20260403T194359Z
CREATED:20190424T085202Z
LAST-MODIFIED:20190424T085202Z
UID:20813-1556127000-1556127000@lhoft.com
SUMMARY:Is AI the solution to biased AI?
DESCRIPTION:As enterprises build and deploy artificial intelligence systems\, it’s important to understand the ethical considerations of our work. Ethics are not a separate business objective bolted on after an AI system has been deployed; they are part of business performance. Only by embedding ethical principles into AI applications and processes can we build systems that people can trust.\nAs AI advances and humans and AI systems increasingly work together\, it is essential that we can trust the output of these systems to inform our decisions. Alongside policy considerations and business efforts\, science has a central role to play: developing and applying tools to wire AI systems for trust. To encourage the adoption of AI\, we must ensure it does not take on and amplify our biases\, and knowing how an AI system arrives at an outcome is key to trust\, particularly for enterprise AI.\nTo help the community engender trust in AI\, IBM Research has open-sourced AI Fairness 360 (http://aif360.mybluemix.net)\, a comprehensive toolkit of metrics and algorithms to check for and mitigate unwanted bias in AI. IBM also launched its Trust & Transparency service as part of AI OpenScale (https://www.ibm.com/cloud/ai-openscale). This service explains how AI decisions are made\, and automatically detects and mitigates bias to produce fair\, trusted outcomes.\nIn this meetup we will explore the ‘dangers of AI’: bias\, lack of explainability\, and robustness issues. We will also explore the AI Fairness 360 toolkit and IBM’s Trust & Transparency service. Hands-on examples will be available.\nSpeaker: Stefan Van Den Borre\, Technical Professional – Watson Data Platform\, IBM.\nDoors open at 17:30 hrs.\nSession intended to start at 18:00 hrs.
URL:https://lhoft.com/event/is-ai-the-solution-to-biased-ai/
CATEGORIES:Webinar
END:VEVENT
END:VCALENDAR