BABL AI Inc.

Who We Are

We are BABL AI, a boutique consulting and audit firm focused on responsible AI. We believe that algorithms should be developed, deployed, and governed in ways that prioritize human flourishing.

We unlock the value of responsible AI for clients by combining leading research expertise and extensive practitioner experience in AI and organizational ethics to drive impactful change at the frontier of technology and emerging standards. Our team consists of leading experts and practitioners in AI, ethics, law, and machine learning.

EEOC AI Bias Audit

Lunchtime BABLing 21

Comments

  • @vanessa1707 (2 days ago)

    You are amazing! You should have more subscribers. Thank you for the nuggets of wisdom. This is the first video on AI ethics and governance that showcases the practical way to practice it and be it! Thank you.

  • @abreton7 (9 days ago)

    Good content, but you have to talk LOUDER or speak into the mic more.

  • @lhiwaya (13 days ago)

    Thank you for sharing your knowledge! What methodologies can you use for doing fundamental rights or human rights impact assessments? Is it the framework described in one of your publications, "A Framework for Assurance Audits of Algorithmic Systems"? I'm also aware of others, such as the HRIA methodology for digital activities from the Danish Institute of Human Rights. Are they comparable?

  • @BobaFettUccine22 (14 days ago)

    It already sounds like quite a lot. I would say the split should be 85% dependency on the developers and 15% on the users. It might differ if you have a no-code computer vision platform to train whatever you want, but even then it would be much easier to ban risky use cases (e.g. face recognition) on the dev side!

  • @tayyabfalak4183 (26 days ago)

    Yup, the most attractive thing I am looking for in AI is the ability to do risk assessment, as this is a crucial aspect while auditing an entity...

  • @WandaBarquinG (1 month ago)

    Thank you!

  • @devsuniversity (1 month ago)

    Is there an alignment between ISO 42001 (AI Management) and NIST? @bablai

  • @bablai (1 month ago)

    The NIST AI RMF does have quite a bit of overlap with ISO 42001, in that many elements of the Govern, Map, Measure, and Manage functions can be mapped onto ISO 42001 controls. However, it's not a perfect mapping, and NIST is both more high-level and, in places, more specific. For example, the Generative AI guidelines that NIST released are not present at all in ISO 42001.
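
    As a purely illustrative sketch in Python of the kind of crosswalk described in this reply: the clause groupings below are assumptions for demonstration only, not an official or complete mapping between the NIST AI RMF functions and ISO/IEC 42001.

        # Toy crosswalk: NIST AI RMF functions -> ISO/IEC 42001 clause areas.
        # The groupings are illustrative assumptions, not an official mapping.
        CROSSWALK = {
            "Govern":  ["Leadership (Clause 5)", "Planning (Clause 6)", "Support (Clause 7)"],
            "Map":     ["Context of the organization (Clause 4)", "Planning (Clause 6)"],
            "Measure": ["Performance evaluation (Clause 9)"],
            "Manage":  ["Operation (Clause 8)", "Improvement (Clause 10)"],
        }

        def coverage_gaps(implemented_clauses):
            """For each NIST function, list the mapped clause areas not yet covered."""
            return {
                function: [c for c in clauses if c not in implemented_clauses]
                for function, clauses in CROSSWALK.items()
            }

        if __name__ == "__main__":
            done = {"Leadership (Clause 5)", "Operation (Clause 8)"}
            for function, missing in coverage_gaps(done).items():
                print(function, "->", missing or "covered")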

  • @devsuniversity (1 month ago)

    Great stream!

  • @ControlAI (1 month ago)

    A much-needed solution!

  • @jose91a (2 months ago)

    What happens from now on could set precedents that will make history. The war has only just begun. Today, I read the law.

  • @Indibugs (2 months ago)

    An excellent walk-through! Many thanks!

  • @reinouttuytten1228 (2 months ago)

    Could you elaborate on who is obligated to carry out a FRIA? Chapter 3, Section 3, Article 27 says "deployers that are bodies governed by public law, or are private entities providing public services, and deployers of high-risk AI systems referred to in points 5 (b) and (c) of Annex III, shall perform an assessment of the impact on fundamental rights", but Section 3 itself is named "Obligations of providers and deployers of high-risk AI systems and other parties", also mentioning providers. In another episode, you mentioned that carrying out a FRIA shows the commitment of the enterprise to its customers, building mutual trust and differentiating early from competitors that choose to postpone this process. I quote: "Now's the time to do that because pretty soon everybody's going to have to do this and you're just going to be one among a sea of people who are only meeting the floor of that regulation". Do you imply that in the future a FRIA will be mandatory for every business that implements AI systems, or solely for businesses that implement high-risk AI systems? Thanks for the great content, by the way!

  • @bablai (1 month ago)

    Most businesses will not need to conduct a FRIA as it's written in the law, only public organisations and certain other private companies offering public services, credit scoring, and insurance. However, the risk assessment process outlined in Article 9 is not dissimilar from a FRIA, so providers of high-risk AI systems will effectively be completing assessments that have to consider fundamental rights.

  • @devsuniversity (2 months ago)

    Awesome video! I am interested in the research of AI auditing techniques.

  • @bablai (2 months ago)

    Glad you liked it! You can check out our recent paper if you want: arxiv.org/abs/2401.14908

  • @VegascoinVegas (2 months ago)

    I am just a novice. I need to find a basic AI Governance platform to be used for collaboration on sports-related content creation. Fact sheet generation will be important. I could use some help finding one. Thank you.

  • @MegaSusie77 (2 months ago)

    🎉❤ Love your sharing. How may I connect with you?

  • @bablai (2 months ago)

    I'm on LinkedIn here: www.linkedin.com/in/shea-brown-26050465/

  • @amirhussain3028 (3 months ago)

    Ahead of the curve

  • @nagalakshmisuren1659 (3 months ago)

    Hello, I am interested in enrolling in the course. Is it possible to still use the coupon code?

  • @user-mq5du4td7j (3 months ago)

    Request: Is there demand for AI Ethics Advisors/Consultants? How do you see the growth of this role in AI companies?

  • @bablai (3 months ago)

    Now that the EU AI Act has passed, I think you'll find companies will be looking for help in preparing and maintaining their AI governance and risk management... AI ethics is an important part of that. The key will be to find a way to get yourself noticed, and to specialize in a particular niche that companies need. For this, talking with as many potential clients as possible will be the best method.

  • @devsuniversity (3 months ago)

    Saved this video! Thanks! This video should have more likes!

  • @bablai (3 months ago)

    Thanks, we agree :)

  • @_Solmega (3 months ago)

    I work for an executive consulting firm partnering with candidates to place C-level and -1 level leaders for some pretty large companies. How do you think that translates into AI ethics? My role is to assess HR leaders, so I don't actually work in HR. More so recruiting. Would love to get into ethical AI work but don't know how to make the connection. Any ideas are welcome!

  • @bablai (3 months ago)

    Most of our work currently is in the HR space, as this is where AI is making a lot of impact and where regulations are focusing their efforts. The connection is easy to make, and it probably just starts with you reading and speaking/posting about it on social media (LinkedIn is great for this). Adding some courses in this area can't hurt to build credibility; shameless plug for our courses here :) babl.ai/courses/. "Lunchtime BABLing" listeners can always save 20% off all our online courses using the coupon code "BABLING".

  • @elbertslonaker1874 (3 months ago)

    👉 Promo-SM

  • @gestaoti4u508 (4 months ago)

    Hi, thanks so much! I am from Brazil and I am looking for options to learn ethical AI deeply, because we don't have good options here yet.

  • @ArucardPL (4 months ago)

    From another video about this I heard that deepfakes (they called them audio- and video-altering programs) won't be considered high-risk, and hence they will be left unregulated. Like, WTF? Is this a joke?

  • @CharlesBudde-vx6vi (4 months ago)

    Putting wishful thinking into law. The EU AI Act sure sounds good. Please start with defining exactly how it will be monitored. Just wrap your head logically around how insanely and clearly impossible THAT is. Now, while you are realizing the very plain fact that regulation is both imperative AND impossible, think about what the structure of enforcement is going to look like. Neither exists. Neither can possibly exist. Either would be a structure that would dwarf all the governments of the EU put together. If you are saying the regulations are like the 10 Commandments (good ideas that you really should adhere to), then I say have at it. Anyone who entertains the notion that 'regulation' is going to somehow govern how, where, and when AI is used, invoked, configured, or injected is deluded.

  • @nenestark2935 (4 months ago)

    Thank you immensely. Your videos are really helping me, and I plan to watch them chronologically. I am seeking to embark on a career in AI ethics consulting. A simple question: what if I am good at some of the skills you mentioned but have no certifications?

  • @ivlivs.c3666 (5 months ago)

    good points

  • @bablai (5 months ago)

    Thank you! Feel free to let us know if there is anything you'd like us to discuss in future episodes.

  • @jasonraymond1638 (5 months ago)

    Greetings from the UK. I am about to take a short AI course so this discussion was useful. I am also looking at the AI and Algorithm Auditor Certification Program. Will the material in the auditor course allow me to operate in the UK and EU?

  • @bablai (4 months ago)

    The field is currently unregulated, so no formal certification is required to operate anywhere. However, we are in close contact with UK and EU regulators about these issues. See, e.g.: www.gov.uk/ai-assurance-techniques/babl-ai-conducting-third-party-audits-for-automated-employment-decision-tools and github.com/algorithmicbiaslab/public-resources/blob/main/policy/eu/eu-comm_dsa_2023-11-20.pdf

  • @svp912 (1 month ago)

    My

  • @sireeshasagi4447 (5 months ago)

    Can you please share more details? What is the format of the course, recorded sessions or live training? What is the exam pattern? Can we take the course at our own pace, etc.?

  • @bablai (4 months ago)

    Great questions. The lectures are recorded, but we have Q&A over Zoom most weeks, a student community (Slack), and students take the courses at their own pace. Exams are once per month. More information can be found here: courses.babl.ai/p/ai-and-algorithm-auditor-certification

  • @ledzepellinrocks (5 months ago)

    Which font do you use in your slides?

  • @bablai (5 months ago)

    Oranienbaum for the headers/titles, and Raleway for the regular text.

  • @pregashield9603 (5 months ago)

    Thanks!

  • @gavinskitt373 (5 months ago)

    This is great. Once I complete the IAPP's AIGP cert, I will be doing this training to niche down.

  • @bablai (5 months ago)

    We'd love to have you! Let us know if you have any questions!

  • @gavinskitt373 (5 months ago)

    Just discovered your channel, and these are exactly the kind of thoughts and conversations I need to hear as I am learning AI governance.

  • @bablai (5 months ago)

    Glad you like it! Let us know if there is something you'd like us to talk about.

  • @Rezzapee (5 months ago)

    What does this mean?

  • @viktoshany (6 months ago)

    Is your certificate program recognized by any regulatory body?

  • @bablai (5 months ago)

    Not at the moment, though we're working on socializing it in both the EU and US. We are recognized by the newly formed International Association of Algorithmic Auditors (Shea is one of the founding members). We talk about it here: kzread.info/dash/bejne/moKFj8xvfs23pco.html

  • @gemeridik (7 months ago)

    Good discussion

  • @ivlivs.c3666 (10 months ago)

    very informative, thank you!

  • @thailevan3720 (10 months ago)

    thank you

  • @ivlivs.c3666 (10 months ago)

    I've been trying to find a way to combine my non-traditional background and skills to contribute to this field, but didn't really know where to start until I found this and your other videos. Really insightful. Just subscribed. Thank you!

  • @tawesmessaoudini218 (11 months ago)

    Amazing content, thank you!

  • @subtlethingsinlife (1 year ago)

    Hi. I am trying to pursue a Master's in AI governance, policy formulation, and ethics, but I am unable to find a start. Currently, I am freelancing and have a background in Data Science. Do I need to have a background in law too to enter this field? As I don't have any connections in this field, I am curious whether I can break into this emerging field.

  • @bablai (4 months ago)

    Checking back in some months later... how's it going?

  • @SidPrahlad (2 months ago)

    Hello, checking to see if you have achieved this at all?

  • @MichelCodere (1 year ago)

    Thanks, very interesting! In the area of systems engineering and software development for military aviation, I can see strong similarities with how we do technical risk management in weapons and software: formal risk management processes and formal compliance were usually required for high-risk areas. Some products to demonstrate compliance needed to be identified up front, as they were very costly or even impossible to develop after the fact (such as test results). I think AI auditing can use some of these principles, as certain things, such as data sources and explainability, can be difficult and costly to develop when an audit happens. And the priority for developing these should be based on an explicit and clear risk management process.

  • @Xejejipi (1 year ago)

    I would love to work with you. Thank you for posting this; you’re a great resource!

  • @tiffanybass9133 (1 year ago)

    Great video! I’m interested in pursuing my PhD, and your videos have been really helpful with exploring AI assessment. Is there a way to connect with you? Coming from a cybersecurity background, I’m looking forward to enrolling in these courses to learn more AI-specific information.

  • @cyprus-_-killer-hd01b3 (1 year ago)

    Keep up the good work, sir. Very informative video that helped me a lot!

  • @basabbhattacharya193 (1 year ago)

    😊😊😊😊😊😊😊😊😊😊😊😊

  • @adamhorowitz7117 (1 year ago)

    I would be open to an audit with BABL... but who can realistically take a course and then be proficient with an AI audit? I would need experience, and I am running an AI project right now... still, it is a lot.

  • @tiffanybass9133 (1 year ago)

    Thank you for this! I am interested in pursuing my PhD in Information Assurance. I have an interest in bias in AI but was having a hard time incorporating it into my studies. Your videos have helped highlight AI auditing.

  • @bablai (1 year ago)

    Glad it was useful Tiffany! Let us know if there are any topics you’d like us to cover.

  • @ledzepellinrocks (5 months ago)

    Mr. Brown, is there a way to reach you?

  • @muralidharsubramniam1364 (1 year ago)

    Very interesting talk!! Thanks

  • @gracethomson (1 year ago)

    Excellent guest! Merve Hickok is one of the 100 Brilliant Women in AI. She is one of the most influential voices in AI ethics and governance. Brilliant!

  • @bablai (1 year ago)

    Thanks, Grace, we were lucky to have her!

  • @bablai (1 year ago)

    Timestamps:
    00:00 - Intro
    01:45 - Why BABL is requiring risk assessment as part of our bias audit
    03:50 - Difference between "risk assessment" and "impact assessment"
    04:26 - Goal of the risk assessment
    05:46 - (2) Identify all stakeholders
    07:24 - (3) Focus: harms vs. interests
    09:23 - (1) Examine the full context: the CIDA narrative
    12:48 - (4) Generate list of harms and causes
    14:45 - (5) Connect harms to causes
    17:39 - (6) Create mitigation strategies
    21:23 - The importance of diverse multi-stakeholder perspectives
    22:16 - Recap of steps
    25:45 - Q: Can risk assessment be an automated process?
    29:24 - Q: Thoughts on NIST Risk Management Framework
    30:57 - Closing thoughts
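
    A purely illustrative sketch in Python of how the outputs of the numbered steps listed above could be recorded; the class and field names below are assumptions for illustration, not BABL's actual methodology or tooling.

        # Illustrative only: a toy record of the risk-assessment outputs outlined in the
        # timestamps, linking identified harms to their causes and mitigation strategies.
        from dataclasses import dataclass, field

        @dataclass
        class Harm:
            description: str
            affected_stakeholders: list                       # step 2: identify all stakeholders
            causes: list = field(default_factory=list)        # step 5: connect harms to causes
            mitigations: list = field(default_factory=list)   # step 6: create mitigation strategies

        @dataclass
        class RiskAssessment:
            context_narrative: str                            # step 1: examine the full context
            harms: list = field(default_factory=list)         # step 4: generate list of harms and causes

        if __name__ == "__main__":
            ra = RiskAssessment(context_narrative="Resume-screening tool used by HR recruiters.")
            ra.harms.append(Harm(
                description="Qualified candidates from a protected group are screened out.",
                affected_stakeholders=["job applicants", "hiring managers"],
                causes=["unrepresentative training data"],
                mitigations=["bias testing before deployment", "human review of rejections"],
            ))
            print(f"{len(ra.harms)} harm(s) recorded for: {ra.context_narrative}")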

  • @bablai (1 year ago)

    Timestamps:
    00:00 - Intro
    01:05 - Algorithmic Auditing International Conference
    04:17 - Common ground: "socio-technical"
    06:15 - Common ground: "public by default"
    07:59 - In debate: "independence" and terminologies
    09:00 - What we did at the conference
    10:08 - Main message to regulators
    11:46 - Challenges with lawmaking and auditing as a solution
    13:52 - A turning point for algorithmic auditing
    15:33 - Q: What should a socio-technical audit look like?
    18:29 - Q: What are socio-technical risks?
    21:18 - Closing thoughts