
Unlocking the Potential of Generative AI in EHS

26 JUNE 2025

ON-DEMAND WEBINAR

Proactive safety needs more than faster response—it requires prediction, prevention, and early intervention. Artificial Intelligence can revolutionize proactive safety, but its use in EHS is still largely underutilized. Explore how AI can improve safety outcomes by automating routine tasks, detecting risks in real-time, analyzing field data, simplifying compliance, and more.

You can expect to:

  • Discover the emerging AI tools shaping EHS and what pain points they are best suited to address
  • Explore everyday scenarios for AI to protect workers and improve performance
  • Learn how to assess your organization's readiness for AI, including the criticality of a reliable data infrastructure

Who Should Watch? Health and safety advisors and managers, industrial hygienists, anyone involved in adopting safety technologies

Transcript

Hello, everyone, and welcome to today's webinar from Blackline Safety: Unlocking AI for EHS. My name is Darcy White. I'm the Director of Demand Generation here at Blackline, and I'll be the moderator for today's webinar.

I'll be passing it over to our keynote speaker, Phil Benson, very shortly, but just need to cover a few housekeeping items first. This will be a 45 minute session, with 15 minutes for Q&A at the end.

The session will be recorded, and you'll receive an email to access it and some additional resources in the next day or so. There's been a lot of interest in this topic, so please feel free to share the recording with any colleagues who couldn't attend today but may benefit from the information.

And due to the large number of attendees today, everybody has been muted.

However, in the bottom right-hand corner of your screen, you'll see a button labeled "Questions".

Please submit any questions you have during the presentation, and we'll get through as many of those as possible during the Q&A at the end.

If you're having any technical issues, first up, try to just refresh your browser. If that doesn't work, you can join via Chrome; it probably works better than some of the other browsers. You can also add any technical issues as a question in the question section, and we'll do our best to support you.

Also, if you click on the "Links" button, down by the questions button in the bottom right, you can also access a few more resources, in advance, including two recent blog articles published by Phil.

Without further ado, it's my pleasure to introduce our speaker today, Phil Benson.

Phil is Blackline's Vice President of Product. So he oversees product management, industrial design, UX design, data engineering, business intelligence, and AI and machine learning services.

With a background in designing industry-leading safety products, Phil brings deep expertise in developing user-centric, high-impact solutions.

At Blackline, he leads the development of a cohesive product and software portfolio. His team is committed to pushing the boundaries of connected safety technology, delivering innovative and data-driven solutions that protect workers in the most challenging environments. So with that, I'll pass it over to you, Phil. Take it away.

Yeah. Thanks a lot, Darcy. That's great. This is a good slide to start on, I think, as we're about to talk about AI and GenAI, to let you know a little bit about myself and where I'm coming from with respect to this presentation, and where you can expect my expertise to be and not to be.

So, you know, amongst these teams is the AI and ML team, which is really part of our data team: data engineering, AI, ML, close partners with business intelligence within the team. I'm not an ML engineer. I'm not here to teach you which algorithms you're supposed to use. The purpose of our conversation is that we at Blackline have been working on AI for a number of years now, and we've had POCs in various things, including machine learning algorithms.

Now we've been working for, you know, about twelve months on generative AI. We're gonna talk a lot about that today. So my purpose here is to say, okay.

Blackline's sitting on just piles of really rich data. This is great.

We work with really great partners in EHS. Throughout my career, at Honeywell and here at Blackline, I've been working with EHS professionals. So I think I have a pretty good idea about some of their challenges, though I'm not an expert in their day to day. And I think I have a pretty good understanding of how a business can apply ML and generative AI techniques, how you can utilize your data. That's where I've been deep. So I'm really trying to merge some of that. I think there's been tons of talk about AI lately.

I think we're all aware of that. That's why you're joining. What I'm really trying to do is just kinda give some use cases, give some places to start, talk about some technologies so that we can kinda get over that hump of here is the hype. Great.

Here is an application, or here's our day to day challenges. And, like, where can we find that intersection so that we can start using some of these tools? So that's it. I'm not professing to teach you ML or anything, but I am hoping to connect some of those dots and just make it comfortable to get started.

So, with that, we'll go into it. This is what we're gonna cover. Right? First, I'm gonna start out with just a bit of an overview of GenAI, but also data maturity in general.

I just wanna talk about where we start as a company, how to think about your own company so that we can frame the conversation.

And then I'm gonna get into three of the key principles for how you should frame your thinking when you're approaching any type of AI project, especially a GenAI project. Like, if someone comes to you and says, oh, you should do something with AI, how are you supposed to even approach that? So that's what the first topics are about.

Then I have three GenAI applications, specifically for GenAI, and some ways to think about using that for EHS. Right? Originally, there were fifteen different topics.

I decided to focus on just the top three of them. If this is valuable to people, we can look at some of the others another time. But I think these ones really get you thinking about how to apply it to your space.

Then I'm gonna talk about three Blackline projects so you can understand what we're doing with our data and how we're trying to derive value from that, and then just some key takeaways. Just wrap it up, and then we'll get to some Q&A. I think there's probably gonna be some interesting questions, and then we'll get going. So the first one, understanding generative AI.

Really, it's like understanding data within your company and AI. So I really like this data maturity curve. This was put out in 2022 by Databricks, and I think it does a good job of giving everyone a way to frame where their company is at, you know, on this maturity curve. Right?

And it starts out with clean data. Everyone talks about clean data. It's a hot topic right now. But really what you're talking about is your ability to make good reports.

Can your company make good reports that are reliable? Is the data there to make reports? And these reports are crafted by somebody, and then other people can review them.

The next is ad hoc queries.

If you have a mature clean data source, you can actually do ad hoc queries where, you know, you'll, use SQL or something and you'll be able to go, oh, okay. Well, now I have a different insight. I have a different question. And you can kind of craft a query, go to your data, you can find that.

Right? And that's more valuable than just having data. You're going up on this maturity curve.

Then there's data exploration. And this is you know, you can even do this in Power BI. Like, you can have sort of exploratory reports. Right?

You can have really good data and then you can be changing your search, sorting, filtering, drilling down and up, and you can answer different questions. Also very valuable. You know, you're working your way up that data maturity curve. And I think for a lot of companies, up until about eighteen months ago, when we weren't all so excited about AI and ML, even though it was happening very much in the field, this is where they were.

Really good companies that have great reports. People could dig around and then do things.

Next is predictive modeling, and that is kind of like data science. Actually, not just kind of like: it's very much data science and very much some machine learning.

And that is saying, okay. Knowing what has happened in the past, having these reports, can I forecast statistically what I think is gonna happen in the future and when I think that's gonna happen? That's predictive modeling. That's really the core of data science. You have a hypothesis, you do some data science techniques like clustering, forecasting, prescriptive stuff, and then you test and see if that is indeed the outcome you get later.
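
To make that concrete, here is a minimal sketch of what "forecast statistically" can look like in practice: fit a trend to historical monthly incident counts and project the next quarter. The counts below are invented for illustration, and a real model would account for seasonality and other features.

```python
# Minimal predictive-modeling sketch: fit a linear trend to past monthly
# incident counts (made-up numbers) and project the next three months.
import numpy as np

monthly_incidents = np.array([14, 12, 15, 11, 9, 10, 8, 9, 7, 8, 6, 7])  # last 12 months
months = np.arange(len(monthly_incidents))

# Simple linear trend; real projects would add seasonality, features, validation.
slope, intercept = np.polyfit(months, monthly_incidents, 1)

future_months = np.arange(len(monthly_incidents), len(monthly_incidents) + 3)
forecast = slope * future_months + intercept
print("Forecast for next 3 months:", np.round(forecast, 1))
```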

The next is prescriptive analytics. So this is like, okay, it's great to say something's gonna happen in the future. Prescriptive is saying, well, what do I do about it? Really, that's what we're all trying to get at here. Data's great, but really I want the insights, and I want them to be actionable.

The top of this, the Maslow's hierarchy of needs of this curve, where you get the most business value, is automated decision making. And this is where you don't, as a human being, look at that data and come up with that insight; you've already worked this system out with your company and with your data, and it just does things for you. And I'm not really gonna talk about that today. I don't think, with GenAI, that where I'm at is that it should just do these things for you, but we're really close.

And I think you can build models and build tools to do that today, to your comfort level, to your business's comfort level. A great example of that automated decision making is water treatment plants, or even city water utilities. I've seen these really interesting AI models where, when you're gonna have flooding in a city, they will automatically open or close different gates throughout the city to let water flow if certain areas are filling up, because you don't know exactly where the water's gonna fall. So they do a really cool job of sort of saying, okay.

We do automated decision making based on predictive analytics and based on the modeling that we've had. So this is the curve. You don't need to be at the top to start using these tools. You can be in the orange dots and start using Gen AI tools in your business, start thinking about how those tools work, and and sort of get started.

So, clean data, obviously important, but I think people should get started in their own way, in a comfortable way for them, really early on. And then you'll figure out what data you need to clean. So that's my pitch on that. Then there's this current state of GenAI, which is just interesting.

It was actually interesting going through this deck. This is from 2023, and it's just wild how quickly these things are changing. This presentation, I forget when I originally gave it to a group of EHS professionals.

Maybe it was a few months ago. And since then, it's evolved, like, three times, just because the tools I'm talking about might be a little different, or even some of the stuff we're working on internally might be a little different. So that's a bit old. But I think what's very true in this corner is that 50% of people are, like, actively assessing.

They're considering it. They're just trying to figure out how to apply it. So that's really where I'm targeting this conversation.

Distinguishing between concepts. I've already been throwing around terms a little bit.

AI, we're talking about all of this. It's sort of like a computer that is using math.

Yeah, a computer that is using math to recognize patterns, make decisions, solve problems, whatever it happens to be. Within that scope is machine learning. Right?

So this is a little bit of a subset. We're using predictive models, and I liken it to, you know, predicting safety risks based on historical data. We'll talk a little bit about that later. But it's really a subset of what you can do in AI as a whole.

Generative AI is a subset therein of machine learning. And that's the one that really has gotten people excited, because ChatGPT put it in the hands of consumers, you know, people like us who aren't building ML models all the time. And all of a sudden, we were typing in questions, and it was saying these really interesting things. We said, wow.

Like, now we're starting to get it. And it was a slightly different thing. Like, that's a large language model. Right?

So these are kind of the concepts, and they're different. ML is different than GenAI, but they're in the same group. They're nested within each other.

Don't worry. It gets more complicated. I am gonna try not to throw around a bunch of these terms. I don't think they're entirely helpful sometimes. But the ones I will throw around: I've talked a lot about GenAI.

Foundation models. So foundation models are these things within an LLM. An LLM is a large language model. That's ChatGPT.

It's something that has been trained on language, on human language, on the Internet, essentially. And it's taking all of that text, and it's still doing predictive modeling, still doing sort of statistical predictive modeling, but it's doing it based on human context. It embeds it with human meaning; it tries to create almost synonyms out of everything that you're doing. Right?

So it's trying to create mathematical weights on the probability of a person saying something. My point is only that you should think of a large language model as interacting with language, with human language, and it's nested within these other bigger concepts like natural language processing. Right?

Large language model. These things are nested within each other. But to talk about foundation models a little bit: when I say that, you can think of, like, GPT-4o as a foundation model. Right?

Anthropic has, you might hear about, Claude 3.5 or 3.7. That's a foundation model.

Llama, Nova, Gemini, like, choose your thing. That's a foundation model, and it's an LLM. It can do a lot of these tools and techniques. So that's what you can be thinking about as we go through some of these.

The next one, just to kinda pause on again, this one's from Forbes, and and it it's really just talking about, like, how people are using these tools. And you can see that the top one there, not surprisingly, is customer service.

And I think that is because of the approachability of these large language models, things like ChatGPT, where you can talk like a human to it and it can give you human-like responses, not just statistical responses, not just reflections of your own data. These feel more human to us in their prediction of what should be said. And that's really great for customer service. A lot of the time people just have a question or there's a workflow they need to complete.

I would say for the customer service one, there are so many off-the-shelf tools. I'm not gonna be talking about that. Obviously, we're talking about EHS, but it's really interesting to understand. And I think the other reason I like to show this is because we're not seeing it in EHS.

And, obviously, that's close to my heart. I work with a lot of those professionals, and I think we've gotta be able to derive some value from this. If we really want to get the most out of this kind of really interesting new technology that has become more and more accessible, then I want to be thinking about EHS. Like, let's make people safer.

So, so I want it to be on this list. And that's part of this discussion, right? To encourage some of that with the experts like you.

Okay. So we're going to dive into the three keys to getting the most out of GenAI. These are just like the three, if you're going to start an AI project, this is my recommendation on how to even just think about that.

So the first one is start with purpose. And I think, you know, I can be a victim of this sometimes too, where I want to apply a technology, and then I start looking around for ways to do that. As a technology person, working with a lot of really smart technology people, that can be my bias. But I always have to remind myself to turn that on its head and go, what's the problem?

Like, rather than thinking, here's the technology, where can I put it, I think it's much easier for everybody to think about, what are my problems? What is not going well? What is a place that I think should be more efficient?

What are the parts of my job that I don't wanna be doing or I don't think I would be the best at doing? I would write these things down, and not just to be complaining to yourself. I put a few examples here. Like, what tasks are incredibly repetitive to you? Where you think, why am I wasting my time doing this thing, this clicking exercise, this reading and copy-pasting exercise?

Or what is very data heavy? Where I've got these big spreadsheets and I'm trying to do these pivot tables and look through them and really divine some insights. And what's time consuming, and what's prone to error? Everyone's really worried about GenAI tools having hallucinations.

Very good thing to be worried about. But I think we should also look at where humans are prone to error as well. We're actually not good at doing a lot of the things that a computer is good at. So think about the things that are prone to error based on your own behavior. And those might be the problems where you can kind of dig into, okay, maybe there's some technology there that can help.

The next is know your data. This one, I think of as starting another list: if your first list is, where do we have challenges, what things do I wanna fix, your second list is, what data do I have access to? Think of the stuff you already have access to, you already know about. Try and make a list of that.

And the interesting thing is, I'm not just talking about numerical data. I'm not just talking about a pump readout or a sensor reading. I'm talking about the data that you have available, and with GenAI, with these large language models, I mean data in spreadsheets. I mean written notes on a piece of paper.

I mean your content management system, your health and safety system, areas where you've collected even very human, written data, things where it takes a bit of context to understand. That has become a data source now. It's become a place where you can get really good insights. So really think about those. I give a list of them here, and some of these are easy.
 
Some of them are harder. Safety management systems, like, really good for this kind of thing. IoT data is awesome. I'm biased, obviously, we do that, but, like, the data can be pretty clean and pretty well labeled, pretty easy to store, pretty easy to query.

So that's a really good place to start.

There's a lot to be done with cameras, flame detection, you know, things like that, where you have a camera and it can tell you if that person has a hard hat on or not. A little more expensive, a little more specialized. I can't tell you how to do that. I just know you can buy them.

That becomes a data source, put it on the list. And then also just think about data that's not even yours, but data that you need access to. So this is like regulatory databases. We're gonna talk a little bit about that.

And then, you know, operational data. So this could be stuff that maybe you don't have access to right now, but your ops team does, or your business does as a whole.

Once you know what you have access to, you can think about what your business has and then consider those. We live in a day and age where you can work with your IT team and work with your ops team. It may not always be easy at the beginning to get the data that you want, but if you can apply it to an EHS outcome, you have a case. I won't say you can always get that data; it is gonna be a struggle sometimes. Some partners you have are gonna be like, yeah.

Take this stuff, make our lives better. Some of them won't, and you're gonna have to negotiate that. But I think that data is there, and it's your business's data. It's not that person's data.

So I would encourage you to consider that it is part of your business, and you have an obligation to be able to use it. But easier said than done. I appreciate that.

Next is understand the tool.

And here, you know, I'm not gonna dwell on this. You can read a lot about this, and it's in the news. Is AI gonna have robots that are gonna do your job, and do we not need doctors anymore? That stuff isn't what our conversation is about today.

I think when I say understand the tool, I'm gonna make it a little more specific to say, we have learned to understand a calculator, what it can do and what it can't do. Right? And we also understand that it can give wrong answers. If I punch something in wrong, it will give me a wrong answer.

And I need to be a human and be smart enough to go, wait a second, that number times that number shouldn't have that outcome. Right? So it takes a little base knowledge.

It takes some context on my side and understanding what I'm trying to get at. Now could I do my job without a calculator? Absolutely not. And the fallibility of the tool is just something I need to be aware of.

Same thing with spreadsheets. Same thing with your computer, with your cell phone. They're, you know, not perfect, but very powerful. So that's the kind of comfort we need to start getting to, especially with GenAI, because there's a lot of fear out there.

And I think part of it is just getting over that. And getting over that is being able to understand it. It's using it a little bit and going, oh, that's what a hallucination is. Of course.

Like, that's okay. And there's ways to mitigate that. There's ways to make it not do that anymore. And that's what I mean when I talk about understanding your tool.

I think the problem you get into: I put a magic wand in the image. This is a GenAI image, obviously.

And I put that on there because the only time you get in trouble is when you think it's a magic wand and it's gonna do whatever you want, and that outcome's gonna be perfect. That's kinda where you get yourself in trouble.

Okay. I'm gonna take a pause for a little bit to switch gears slightly. Before the webinar, we asked some questions. One of them was, are you using AI as part of your EHS strategy today? And 79% of people said that they're not using it or they're just starting to explore, which is perfect, because that's really who we're focusing this webinar on. So great.
 
But, you know, some of them are saying they're not yet using it, or they're sort of starting to explore.

Some of them are actively using it. So you'll see there's about 10% there, and another 9% that are piloting it in a couple of areas too. So I think that's just as important as the 80% of people I wanna highlight, like, hey, you're in good company. The other message is to say, hey, people are using this.

There are real tools out there. Some of it's being piloted. Some of it is in place. There's different ways to apply it.

So you can feel comfort in being in the "I don't know yet" group. But I would also feel really encouraged. You kinda wanna be part of that piloting 9%, really, so that you and your business can learn almost regardless of the outcome.

Okay. So next, three GenAI applications for EHS. So here's where I took those topics and I said, okay.

Well, how can GenAI help in these contexts that I know EHS professionals can have challenges in or can gain efficiencies in? But before I do that, this is a warnings and disclaimers slide, because, as I say, you can't just jump into this stuff blindly. So the first one is you still have to be mindful of corporate policies. Right?

Working with, your IT team is a good idea.

When I say work with them, I mean don't just let them dictate stuff to you; actually work with them. Let them know what you're trying to do. Let them kinda keep you safe within your policies.

This is not a replacement for human judgment. As I talked about with the tools, it's just not. I don't recommend that. I think there are places where that can happen, but as part of this talk, I'm not advocating it as a replacement for human judgment. Especially in EHS, your context and understanding of what you're trying to achieve is just so critical, and the stakes are so high.

So this isn't gonna replace you. We're hoping it just augments you and makes you better. Next, be mindful of the hallucinations. This is where, especially with GenAI, you will get answers that appear to be very confident and sound very real.

Just like when you hire a new intern who is fresh out of school and really keen: sometimes what they say isn't right, despite the emphasis they put on it. So treat a GenAI tool just like that.

Except this new intern knows, you know, ten different languages and has a doctorate in, like, every field, and that's the magic of GenAI. You still have to be mindful of what it is that it's giving you back.

I talk a lot about tools here and ChatGPT because it's, you know, the Kleenex or Rollerblades of GenAI tools. But don't give your private or corporate data to a public GenAI tool. You know?

That's my advice. Even the New York Times has a big lawsuit going on with OpenAI to try and get people's prompts. So I would just, you know, maybe not do that. Just use it in a way you'd be comfortable with if somebody else read it.

That's being overly cautious perhaps, but that's my recommendation. And AI requires quality training data. Of course, if your data is bad, your output's bad; you can't really blame the GenAI model for that. Sometimes you gotta dig in and find out why there's a problem.

Oh, and I referenced software tools here. I'm not endorsing them. And some of these I haven't used. I'll try and let you know the ones we've used and the ones we haven't, but in general, I'm not endorsing these. I don't get paid by these people, although that would be great.

But my last thing, the biggest thing that's in red, is don't be intimidated by this. Yes, these are warnings and disclaimers, but don't be intimidated; try your best to get out there and use them. That's really what my talk is about. I'm just saying this to kinda keep us all out of trouble.

Compliance and regulation is a big part of an EHS professional's job generally. And the challenge is staying up to date with these evolving regulations. Depending on the complexity of your business, you can have different types of regulations.

You have them in different countries, different regions within a country, and you can have them on different topics. So it just gets to be a lot. So that's sort of the problem statement. And how GenAI can help is, as I say, what these large language models, these GenAI language models, are good at doing is, call it, reading documents and being able to understand context.

Right? And then being able to help you break that down and do what you need to do. So you can have your own prompts to be able to get the output you need. So if you think about the tool being able to basically read those documents and summarize them for you, there's some cool things that you can do.

So, a lot of the easy ones: this is kind of a good place to start, because you can do this today if you have access to a GenAI tool. Or, I'd say, you do have access to a GenAI tool; the question is which one. So one of them is just, summarize the key safety policy changes in this PDF, because they can be huge documents, and you just wanna go, hey, summarize this for me.

And, you know, you can upload the updated policy and the previous policy and say, tell me the difference between these two. What's changed between these two documents? And it's not just doing a word-for-word, line-for-line, this used to have a bullet point and now it doesn't.

It's reading, I won't say understanding, but it is reading those two documents and giving you the gist of what has changed. Right? And you can even generate comparison tables too.

Say, well, what's the difference between this thing and this thing? Because they can be huge.
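
As a rough illustration of that "compare two policy versions" prompt, here is a minimal Python sketch, assuming your organization already has access to a model running in its own stack via AWS Bedrock (mentioned later in the talk). The file names and model ID are placeholders, not recommendations.

```python
# Hedged sketch: ask an in-stack LLM (via the Bedrock Converse API) to
# summarize and compare two versions of a safety policy. File names and the
# model ID are placeholders.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

old_policy = open("policy_2023.txt", encoding="utf-8").read()
new_policy = open("policy_2025.txt", encoding="utf-8").read()

prompt = (
    "You are assisting an EHS professional.\n"
    "Compare the two safety policy versions below.\n"
    "1) Summarize the key changes in plain language.\n"
    "2) Produce a short comparison table (section, old requirement, new requirement).\n"
    "3) Flag anything a site safety manager should read in full.\n\n"
    f"--- PREVIOUS POLICY ---\n{old_policy}\n\n--- UPDATED POLICY ---\n{new_policy}"
)

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder model ID
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```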

And then to go further, semi-automation is also possible. So I would say you can start with that first thing and gain comfort, work on your own prompting, asking it questions, seeing what results you're getting. But then semi-automation is possible. Like, you can set up AI agents that will scrape regulatory databases, depending on how you get that information, but you can, you know, scrape the Internet or be able to get those documents.

And it can do that on a schedule even.

And then, with those well structured inputs, I'm talking here about some tools like Zapier and Make and n8n. Those are all agentic workflow tools. So it's a workflow, like, I don't know, Power Automate or something. But what you can do is put GenAI tools, these large language models, within that. So you can say, go get these documents on a schedule.

When you get the documents, I want you to sort of read them, and then I want you to create me a summary and send me an email. If there's no updates, send me an email that says no updates. If there are updates, give me a summary of the updates and tell me what you think I should go read from those documents. Very doable thing today.
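
For anyone curious what that workflow looks like outside of a no-code tool, here is a rough, hypothetical sketch in Python. The URL, email addresses, and mail relay are placeholders, the summarize step would call whatever in-stack LLM you use (see the Bedrock sketch above), and in practice you would run this weekly from a scheduler such as cron.

```python
# Hypothetical sketch of the scheduled "check for regulation updates and email
# me a summary" workflow. Everything marked as a placeholder must be replaced.
import hashlib
import smtplib
from email.message import EmailMessage

import requests

REG_URL = "https://example.org/regulations/confined-space"  # placeholder source
last_seen_hash = None  # persist this somewhere durable (file, database) in practice


def summarize_with_llm(text: str) -> str:
    """Call your in-stack LLM (e.g., via Bedrock) and return a short summary."""
    raise NotImplementedError  # see the earlier Bedrock sketch


def check_for_updates() -> None:
    global last_seen_hash
    page = requests.get(REG_URL, timeout=30).text
    page_hash = hashlib.sha256(page.encode()).hexdigest()

    if page_hash == last_seen_hash:
        body = "No updates to the monitored regulation this week."
    else:
        last_seen_hash = page_hash
        body = summarize_with_llm(page)

    msg = EmailMessage()
    msg["Subject"] = "Weekly regulation check"
    msg["From"] = "ehs-bot@example.com"   # placeholder addresses
    msg["To"] = "you@example.com"
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:  # replace with your mail relay
        smtp.send_message(msg)
```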

It takes a little bit of work, but I would suggest that work in practice is gonna help you understand how those kinds of tools and workflows operate. And right now these are Zapier, Make, and n8n, but there are big companies working on this. I would be really surprised if, like, next year, Microsoft didn't have that.

They already have Copilot Studio. That's where you're kind of taking their foundation model and you're training it, teaching it to respond to you in the way that you wanna be responded to, with the data that you would like it to have context on. So Microsoft already does that part of it today. The rest of it, I'm talking about workflow, but that's a little more advanced.

I think you can start with that. The other thing I'd say with regulatory, the other caveat here, is that, again, you don't fully trust the intern: if that document is that important to your business and your career, like, read it, please. If you have a big document, you kinda understand what's happening, and you just need a summary, you need a PowerPoint, you need to explain it to other people, GenAI can do such a good job of summarizing that, putting it into a PowerPoint, putting it into a summarized Word document, an executive summary, comparing it against policies, all great.

But if the stakes are high, like get yourself in there.

Or there's another company; I was lucky enough to meet the CEO of Verano AI, just unrelated, I happened to be talking about this presentation, and he's like, oh, that's what my company does. Really smart guy. But you can get these tools off the shelf and save yourself a little bit of this.

Like, am I gonna build it? What do I have to understand? What foundation model is it using? Again, as I said, you're not trying to send this data to these other companies to do what they need to do. You don't have to send your policies there.

What Verano does is help you run its own models within, I think, AWS.

But you can run it on your own stack and then be able to automate this compliance. And it does it across different areas of your business. So it doesn't have to be EHS, but that's one of the places. So my point there is, whether it's Verano AI or not, there are tools out there, and understanding what you're trying to achieve, those first three things I talked about, is gonna help you even pick the tool you want off the shelf and implement it.

Data analysis and reporting. This is a big one. If you think about EHS, there is often already tons of data. People want that for ESG reports. People want that for their own reporting based on what their different teams are doing, different initiatives that they have, or you're just trying to tell a story to someone to say, this is really unsafe, and here's why we have a policy. So you're building these reports frequently, and they're often nonstandard because they have to do with the problem that you're looking at that day, or somebody asks for some data.

And this manual reporting sort of slows everything down. If you need to curate that report, if you have to go into Power BI or get a data team to build it for you, that slows you down. So what you can actually do, and I'll show this in a Blackline Safety example as we get further on, is, essentially, you can get a tool and you can ask it a question. You can say, well, how many alerts did I have last month compared to this month? Or how many alerts did that team have compared to that team? Or compare this type of event to that type of event.

If the data is there, what this GenAI layer can do is go look at that data and then make you a report. It will make you a graph, a chart, you know, like a time series, a trend over time, and it will show you what you wanna know. And you can ask it questions and you can tweak it, and then you don't have to do the SQL query. You don't have to get your BI team; it'll make you that visualized report.
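
To demystify what that "GenAI layer" is doing, here is a simplified, hypothetical stand-in. Managed tools like Amazon Q for QuickSight or Power BI Copilot handle all of this for you, but conceptually it is turning a plain-language question into a query over a known schema and then presenting the result; the table, columns, and database file below are made up.

```python
# Simplified stand-in for a natural-language reporting layer: an LLM turns the
# question into SQL against a known schema, then the query runs locally.
import sqlite3

SCHEMA = "alerts(alert_id, team, alert_type, occurred_at)"  # hypothetical table
question = "How many alerts did Team A have last month compared to Team B?"

prompt = (
    f"Given the table {SCHEMA}, write a single SQLite query that answers:\n"
    f"{question}\nReturn only SQL."
)
# sql = call_your_in_stack_llm(prompt)   # e.g., via Bedrock, as sketched earlier
sql = (  # hardcoded here to stand in for the model's response
    "SELECT team, COUNT(*) AS alert_count FROM alerts "
    "WHERE occurred_at >= date('now', 'start of month', '-1 month') "
    "AND occurred_at < date('now', 'start of month') "
    "AND team IN ('Team A', 'Team B') GROUP BY team;"
)

conn = sqlite3.connect("ehs.db")  # placeholder database
for team, count in conn.execute(sql):
    print(team, count)
```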

I'll show you an example. And a couple of tools here, like, we're, exploring this with, Amazon Q for QuickSight. So Amazon Q is sort of like their, GenAI suite of tools. QuickSight is their reporting tool.

And then Power BI, in conjunction with Copilot, has some of these things as well. We're sort of just exploring those. We've gotten pretty far with QuickSight.

We know it's capable, or there will be capabilities in Power BI. And then I think Tableau does this too. I think Tableau has Tableau GPT. Never used it; we're not a Tableau crew here, but that's another one. If you're using it, I think you have this ability to set your data structure and your tools up to just ask the questions and build your reports.

Next one is risk assessment. This one's like, the stakes get higher as we go. This one is, we collect a lot of data, and a lot of EHS data that gets collected is ultimately there to mitigate risk. It's to be able to understand what's happening in the field, understand risk, and then be able to mitigate that, ideally stop it altogether, or come up with policies that are really going to help you there.

So where GenAI can help is it can scan these large volumes of data. Right? And then be able to give you contextual insights. You can ask it questions.

It can tell you things.

The other thing you can do, with a bit of work, and my next slide will talk about how to do this, is cluster incidents. Like, it can do that for you. It can sort of say, hey, you're having a lot of these types of things over here, or this type of event in this area, or there might be a correlation between these two things, across these huge datasets.

Right? And you don't necessarily need to get a data scientist to do that for you. They're probably gonna be better at it, but it's gonna take longer and there's gonna be more time involved. And it has to be a very specific project, as opposed to a, hey,

I wonder if.

And, so, yes, still talking about risk assessment here.

What I would suggest in getting started with this is, like, start with a small dataset you understand first so that you know that those insights are, like, relevant. Start with, like, small amounts of data that you can wrap your own human brain around before you sort of give it the whole treasure trove.

And then there's GenAI tools for pattern recognition. Again, ChatGPT Pro can do this, and Claude, and Perplexity AI. Again, be mindful of sending this data anywhere that isn't within your stack. So, you know, there's more there.

I'm not proposing you should just be sending health and safety data outside of your company. I'm not saying that. But I think you can get these tools to work in house. Tools like AWS Bedrock are gonna let you run some of these, especially things like Claude, within your own sort of tenancy.

Right?

And then, you know, future potential is to build a real time risk model. I'll talk a little bit about what we're trying to do at Blackline.

This is one where I have a caveat. I'm saying a lot of these cool things, like, it'll do risk prediction. You pump in your data. It's gonna draw these parallels.

You can totally do that, and I'll paint a picture of how you do it. It's just that this one gets a little more complicated. Some of the other ones I'm talking about are, like, throw in a document, get some insights, make a PowerPoint presentation.

This one, there's a few steps to it. In simple terms, there's, like, three steps. You need to do document ingestion and processing, so it can process those PDFs, Word files, Excel sheets, form entries, and you'll hear a term for that: RAG. They call it a RAG system.

And RAG is retrieval augmented generation. This is really important for any of the GenAI, large language model stuff. I'll simplify a RAG system to just say it's a place where, or a technique for, putting in relevant data. So in the case of risk predictions, maybe that's where you put all your incident reports.

Probably more applicable to the previous example of certifications and standards. Put in your certifications and standards. It doesn't just store them. It actually breaks them down.

It creates chunks or tokens, and then it embeds them, which really means it turns them into multidimensional vectors, which helps the GenAI system search based on context. Right? It's not just a word search. That's not what it does.

It searches based on statistical relevance to the question that you had, using complex math, using linear algebra. So that's what a RAG model is. Basically, you store some stuff in there and your GenAI model can look at that and reference it. The next step is embedding and the vector store.

It's not enough to just store it in that place. You'd have a vector store, and I mentioned creating these multidimensional vectors. The idea is you need a really quick way to search that, because if you're doing anything big, like if you're trying to run a clustering algorithm on top of this huge amount of data, you're gonna have to set up those vector stores, and there's ways to do that as well.
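
Here is a minimal sketch of those first two steps (chunking, embedding, and similarity search), with embed() left as a placeholder for whatever embedding model runs inside your own stack; this is illustrative, not a production RAG pipeline.

```python
# Minimal RAG retrieval sketch: chunk documents, embed the chunks, keep the
# vectors, then rank chunks by cosine similarity to a question. embed() is a
# placeholder for your own in-stack embedding model (e.g., via Bedrock).
import numpy as np


def embed(text: str) -> np.ndarray:
    """Placeholder: return a fixed-length embedding vector for `text`."""
    raise NotImplementedError


def chunk(document: str, size: int = 800) -> list[str]:
    return [document[i : i + size] for i in range(0, len(document), size)]


# 1) Ingest: chunk and embed your incident reports / standards once, up front.
documents = ["...incident report text...", "...confined-space standard text..."]
chunks = [c for doc in documents for c in chunk(doc)]
vectors = np.array([embed(c) for c in chunks])  # this array is your "vector store"

# 2) Retrieve: embed the question and rank chunks by cosine similarity.
question_vec = embed("What conditions preceded our recent H2S alerts?")
scores = vectors @ question_vec / (
    np.linalg.norm(vectors, axis=1) * np.linalg.norm(question_vec)
)
top_chunks = [chunks[i] for i in np.argsort(scores)[::-1][:3]]

# 3) Generate: pass top_chunks plus the question to the LLM as context.
```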

The last step is clustering and intent analysis. This is totally data engineer, machine learning engineer territory, or data scientist territory. I'm not even gonna try and explain k-means or DBSCAN, because I can't. My point is only to say that there are even different ways to cluster. By clustering, I mean it can look at patterns to say, well, these are the same as these.

So, like, let's talk about these as being, really prevalent within your data and how they might even relate to one another.
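
And a small sketch of that clustering step, using the scikit-learn implementations of the two algorithms named above; the embeddings file, cluster count, and DBSCAN parameters are arbitrary placeholders you would tune for your own data.

```python
# Clustering sketch over embedded incident descriptions (e.g., the vectors
# produced by the RAG sketch above, saved one row per incident).
import numpy as np
from sklearn.cluster import DBSCAN, KMeans

vectors = np.load("incident_embeddings.npy")  # placeholder file

kmeans = KMeans(n_clusters=5, random_state=0, n_init=10).fit(vectors)
print("KMeans cluster sizes:", np.bincount(kmeans.labels_))

dbscan = DBSCAN(eps=0.5, min_samples=5).fit(vectors)
n_noise = int((dbscan.labels_ == -1).sum())
n_clusters = len(set(dbscan.labels_)) - (1 if n_noise else 0)
print("DBSCAN clusters:", n_clusters, "noise points:", n_noise)
```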

So that one gets a bit complicated, and I'll show you how we're gonna apply it.

Okay. Pause again.

Pause for a little bit of context switching. Let's talk about the audience a little bit, the people that are on the call. We asked, what's your biggest challenge when it comes to using AI? And 50% of people said they don't know where to start. My previous slide is getting into, like, oh yeah, you're gonna need a vector store, and you're gonna have to do some k-means and things like this. That's not where you need to start; that's not really the point.

I'm really just highlighting that there's an order of complexity here, and you can get pretty deep, but you don't have to get that deep. So I think a lot of people, and this is where I kinda started too, just said, well, I don't know where to start. I think these are really cool tools. I think I know what I'm trying to achieve, a little bit, but what am I supposed to do? And it took some time to get there.

And, so, yeah, don't know where to start.

A lot of people will kinda stop at the data readiness and quality. And, I mean, yes, you've gotta address that, at least to get really great, valuable outcomes. But I would say don't stop there before getting started.

Acknowledge the fact that you don't have that particular thing. Do your best to get that data, even if it's sloppy, I guess, but your outcomes are gonna be bad. So what I'm recommending is a bit weird, because I'm saying, like, you should still get started.

Get started so that you understand, so that you can get the data that you need to answer your questions; that's my recommendation. And then regulatory concerns, privacy concerns, budget constraints. These are the bread and butter of why we can't get started with projects we're passionate about, at any business level. Right?

So those ones, I mean, on almost every project, those three, at 8%, 8%, and 6%, those are gonna stifle you. I don't have great advice on that. I think with regulatory concerns, it's up to people like us to prove why we're doing the right thing with the GenAI tool. People might get freaked out.

You can say, well, here's where I'm storing the data. Here's where it gets processed. I already have access to these reports, but we're not really doing anything here that's gonna break policy. And I think you wanna try and engage in those conversations.

Okay. The last three are Blackline Safety AI projects. These are not all GenAI projects, but this one is: customer reporting.

I mentioned this a bit earlier in the, reporting and analytics section. So what we're doing at Blackline is, you know, our customers have access to you know, when you when you get in the door at Blackline, you you sort of care about safety and your data forward. Right? We're collecting a bunch of this data, and it's up to you to try and use it in an interesting way that can help your business and your employees.

We have fifteen plus sort of standard reports that just kinda come out of the box and go, here are your reports, fill your boots.

And there's three email standard reports as well. What we almost immediately found is that, that's not enough for people, is they have specific questions. They have specific problems they're trying to solve. They wanna use the data in these really interesting ways.

And so we weren't able to build all these reports. And then the other challenge is as we get ten reports, fifteen reports, we're already trying to build more reports to solve these problems. What we find is that there will just never be enough because people need to solve a specific problem. Right?

So we have these fifteen reports so that people can understand what is possible, the data that we have at our fingertips, and the sort of insights that are tried and true for Blackline and Blackline customers. But people need to ask questions. So we're using GenAI, and right now we're partnering with Amazon Q for QuickSight. They've been really great about kind of supporting us as a business.

And they're helping us do POCs to be able to say, okay, listen, here's your dataset. You have this big wealth of data, right? You've got the big data lake, and you're already reporting with this data.

So get the GenAI tool running in your own stack to be able to look at this and then generate reports for people, so that a customer can come in and say, show me the alerts between June 12th and June 30th. And the data can stay where it needs to stay. We still have full control over our customers' data, and we can keep that secure.

But they can still get the power of GenAI, so when their boss says, hey, how many H2S alerts did you have for this particular group over here, you're not clicking through reports. You just type it in, and it's gonna show you that answer.

Again, we also use Power BI here, and I think there's a Power BI Copilot. It's been a little less intuitive for us on how to apply that, but I think it's gonna get there. I think Microsoft is gonna be excellent at that too. Like, between AWS and Microsoft, you can't really go wrong.

AWS is just a little more cutting edge, with their developer-first kind of mentality.

So we hope to be able to show this to customers soon.

Next is sensor forecasting, not Gen AI at all. This is like classic data science initiatives. We've done some POCs in this in the past. The tools have gotten a lot better.

Our ability to collect this data has gotten better. I talk about this just because a lot of people have sensors or things where they're trying to be predictive, and there's real value here in going, I guess, back to basics on data science and ML engineering. There's just a huge amount of value. The way that our devices work, you know, is you have, basically, a gas monitor.

It's IoT, connected to a network, so we can collect the data there. Your company's gonna have access to their own data. Within here is a sensor. It's just a gas sensor, and they're relatively commoditized, depending on who you ask on the subject, but you can buy them from many different places.

So we wanna buy the best ones. And they have a lifespan. They only last for a certain amount of time, and how long that is depends on many different inputs, especially temperature, and humidity is a really big one for sensors, but then also the number of exposures, the frequency of exposures, the duration of exposures. So with all of that, what we're trying to do is forecast when that sensor is gonna go, because we know it's gonna go.

And if we can say that before a person starts their shift, or before it starts to fail a calibration, that's gonna really help customers. It's gonna help us. So this is an example of where you have the data in the past on how sensors operate, and you wanna be able to forecast into the future when they're going to fail or how they're gonna perform.
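
As a hedged illustration of that sensor-forecasting idea (this is a sketch, not Blackline's actual model), one simple framing is a regression from operating conditions to remaining sensor life; the dataset file and column names below are hypothetical, and a real model would need careful feature engineering and validation against actual calibration records.

```python
# Hypothetical sketch: predict days until a gas sensor fails calibration from
# its operating history. File and column names are placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

history = pd.read_csv("sensor_history.csv")  # placeholder dataset
features = ["avg_temp_c", "avg_humidity_pct", "exposure_count",
            "total_exposure_minutes", "days_in_service"]
X = history[features]
y = history["days_until_cal_failure"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("Held-out R^2:", round(model.score(X_test, y_test), 2))
```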

The last one here: I've talked about risk prediction a number of times because that's something that, to us, is kinda like the holy grail. We're really good at showing people where gas exposures are happening, where falls are happening, where people are calling for help, and even gas exposures that are nonzero but below the low threshold.

We wanna be able to use that data and show those people. We did a POC with one of our best clients a few years ago where we made a predictive model just for them. And so now, looking at these new tools, we're working with Amii, which is the Alberta Machine Intelligence Institute out of Edmonton, a really great partner. And they're helping us tackle this big problem set that we have.

And this really isn't GenAI. I think there's gonna be some GenAI outputs, the ability to ask it questions and understand. But a lot of this is good old fashioned machine learning, where you're really trying to be predictive about the datasets you have. So this stuff is very possible.

And I wanna highlight where you can go with this. Because from an EHS standpoint, if you can tell where your risk is the highest, if you can see a probability of something happening that shouldn't happen, that's what we're all trying to do here. We don't want bad things to happen. And so using AI tools across the board, if it can help us do that, I mean, that's what we're all getting up in the morning for.

And this is what Blackline is really trying to, I don't know, democratize for all of our customers. We want this type of thing to be available with our data, and I think you can layer data onto this. So it's pretty exciting.

Key takeaways. Your domain expertise is key. There are lots of tools out there. The trick, knowing how and when to apply them, is up to you.

It's very much a human operated toolset, and that's where we should be focused. And the more you know about your vocation, the better you're gonna be. Having just the data scientists with ML tools sitting in a room by themselves, you're not gonna solve anything. You wanna partner that person up with the people who really understand context, and that's largely, I think, the people on this call.

Work with partners. I mentioned a couple commercially available solutions, and they're dying to sell you stuff and work with you. The trick is finding the right partners, finding ones that are very engaged. But I would say you don't have to do this all yourself. You should be an expert in EHS. Don't expect to be an expert in AI, but work with people who are, so that you can apply your domain expertise.

Next is start playing. Again, just, like, find safe ways to muck around with these tools. It's the classic, like, can you get two hours out of your schedule in a week to try and do this? I really encourage you to do it, because you learn a lot even from all of your sort of failures. Even just watching YouTube videos on how that tool works, you go, oh, that's possible. I didn't really think about it that way, and that and that's hugely valuable.

And then as you're doing things, as you're mucking around with ChatGPT or Copilot, work on your prompt engineering. The better you get at asking questions and interacting with these tools, the better your responses get. And prompt engineering, for a person like me, is just, you know, you're prompting, you're asking questions.

Prompt engineering is actually, oh, I lost my connection for a second there. I'm back. When you're using a foundation model and you're training that foundation model, adding a layer of context on it, you can teach it through prompt engineering. You can teach it how to operate better under the circumstances that you need.

And so practicing that is really important too.
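
A minimal sketch of that "layer of context" idea: a reusable system prompt sent alongside each question, shown again with the Bedrock Converse API as an assumption; the model ID is a placeholder and the instructions are just an illustration of the kind of behavior you can bake in.

```python
# Hedged sketch: a system prompt that shapes how a foundation model answers
# EHS questions, reused across every call. Model ID is a placeholder.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

system_prompt = (
    "You assist EHS professionals at an industrial company. "
    "Answer concisely, cite which source document each claim comes from, "
    "and say 'I don't know' rather than guessing when data is missing."
)

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder
    system=[{"text": system_prompt}],
    messages=[{"role": "user",
               "content": [{"text": "Summarize last week's gas alerts by site."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```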

And I think we did it, pretty good on time. I think we have some time for some questions, if Darcy wants to come back and give me a hand. Thanks, everyone.

Yeah. Thanks, Phil. That's a ton of really good information. So we've got a good set of questions already coming in. But, yeah, feel free to continue to ask them. Just pop them down in that questions button on the bottom right.

But while you continue to ask questions, we'll jump right in.

So this one kinda goes back right to the beginning to that data maturity curve slide.

The question is: based on industry studies, what are the fears or reasons for the 8% not considering generative AI?

I can only give you my opinion on that.

Sorry, I think I'm back. I had some Internet issues there. But I can only give you my opinion on that, although I believe it's probably covered deeper in that article.

But, anyways, I think the "not considering" often comes down to people being too busy. It's like they're doing their day job, and then they're like, well, these tools will become available; someone's gonna ask me to do that. You know?

And you're not entirely wrong, but with the way things are changing, the key that I'm trying to push here is to understand it. Even if you're too busy, try your best to understand it, because imagine being the person in your workforce who didn't really get Excel, didn't really get spreadsheets that much. I've seen this before: I've worked with people who will put all the values into a column in Excel, and then they'll use a physical calculator to add them up, and then they'll type that value into the bottom.

And when you see that, you go, okay, you're technically using the tool, but you're not getting the value out of it. Right? So that's the thing.

But I would just say people are really busy. We all have our own challenges and hills to die on at our office. So I think the 8% is just we're overwhelmed, and now there's another thing that we're supposed to consider.

Awesome. Okay. Next one. I've met some resistance. What's your advice for leaders who are hesitant to adopt AI?

Yeah.

I think I've tried to talk about that a bit within this, but I'll answer it directly and say we need to be really clear about what our fears are. Right? And we need to compartmentalize those fears. I think one of the biggest fears is data security, and I think that's probably one of the most real fears. So write that down: data security.

The trick is you can utilize AI tools and GenAI without having your data leave your company, and I say "leave your company" loosely, because we're all more or less cloud based now, so it's a relative term. But you can do that. There are ways to operate these tools within your own toolset.

And, you know, again, we're an AWS company, so I can speak a little bit about, AWS Bedrock.

If you use Google, there's, like, Google Agent Garden.

Okay. Security is like job number one with Bedrock. And so what you can do is think of it as, you're getting a foundation model, and now you've got it, it's safe, your data is not going to other people, and then you can build your layers of context around that. You can train that model to do what you want. So lean on your providers like AWS and go, hey, I'm having problems with this.

Tell me what you do about security so that I can take that back to the rest of my business. And work with your IT team: go, what are you afraid of? Really address those topics, because the devil's in the details. What you don't want to do is go into meetings and say, I'm using a new GenAI tool, and not be prepared for people to go, our data's going off premises, this is a bad thing.
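For readers who want to see what "running these tools within your own toolset" can look like in practice, here is a minimal sketch of calling a foundation model through AWS Bedrock with boto3, so the prompt and response stay inside your own AWS account. The region, model ID, and prompt are illustrative assumptions, not a Blackline configuration.

```python
# Minimal sketch: calling a foundation model through AWS Bedrock so the
# prompt and response stay inside your own AWS account and region.
# Assumes boto3 is installed, AWS credentials are configured, and your
# account has access to the model ID shown (the ID is illustrative).
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize our confined-space entry procedure in plain language."}],
        }
    ],
    inferenceConfig={"maxTokens": 500, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```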

And be prepared also for the hallucinations talk. Again, I would ground it a little bit: if I asked so-and-so at my office, am I expecting them to give me the right answer 100% of the time, every time? Probably not. So let's look at the tool realistically and think about reducing hallucinations, not about avoiding the tool because it might give you the wrong answer. Those are a little bit of platitudes, but I would say really address those things head on. Don't ignore them. Don't gloss over them.

So kind of a good segue to our next question.

So along that same line of data privacy and security concerns, but more specifically: does my company's private information remain confidential and secure with ChatGPT paid services?

That's a tough one.

My advice is pretty conservative. Right? We work at a safety company. So I would say I do not recommend to anybody, even on a ChatGPT Pro account, to just be giving your data to that. Now, I think there are lots of explanations ChatGPT can give you on why that's okay, and I think that's great.

I wouldn't do that. I would find ways to run those models within your own organization's stack or within its own structure.

Again, that's why we partner with AWS for that. There are lots of things you can totally ask ChatGPT, especially if you can anonymize or synthesize the data to do it. I just don't recommend that people use ChatGPT Pro, take a bunch of private corporate data, and put it in there.

And maybe that's me being kind of late to the game and a little more cautious, but we're talking about EHS things here. I think we want to err on the side of caution. So, no, I don't necessarily recommend that. What you want to be able to do is get a tool and either have a service agreement with that tool, where it's going to acquiesce to those demands, with that data being deleted right away, which is harder depending on your prompt.

But get right into the weeds on that one. If you're trying to do these bigger, larger projects asking an LLM questions, cool.

But you really don't want your data to go to a place you don't know about. And that's where Copilot becomes really valuable. A lot of us here are Microsoft people. Look at those policies, but Copilot for business is pretty tight.

You know? You can do a lot of these kinds of things within the Microsoft stack that your BI team is going to be much more comfortable with. So I guess my short answer is no. Maybe I'm a bit wrong on that, but I think let's err on the side of being conservative.
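As a minimal sketch of the anonymization idea Phil mentions, here is one way to redact obvious identifiers before a prompt ever leaves your environment. The patterns, field formats, and example values are illustrative assumptions; real redaction would need to cover the identifiers your own data actually contains.

```python
# Minimal sketch of anonymizing text before sending it to any external LLM.
# The patterns below are illustrative only, not a complete redaction policy.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),                      # email addresses
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),                # phone numbers
    (re.compile(r"\b(?:EMP|ID)[- ]?\d{4,}\b", re.IGNORECASE), "[EMPLOYEE_ID]"),   # assumed badge format
]

def anonymize(text: str) -> str:
    """Replace obvious identifiers with placeholders before prompting."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = anonymize("Worker EMP-10482 (j.smith@example.com) reported a gas alert at 14:02.")
print(prompt)  # Worker [EMPLOYEE_ID] ([EMAIL]) reported a gas alert at 14:02.
```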

Great. Thanks.

And you kind of touched on this a couple of slides ago, but I think it really comes down to the end of this question. What can we use AI for when it comes to analyzing risk using Blackline data specifically? The items we store are usage, compliance, and alert data. And, hey, this is really the key: how can we leverage this to promote field safety?

Well, yeah. Okay. So this is about Blackline data specifically, Darcy?

Yeah.

Yeah. Okay. That's the key. For us, where I really think it's valuable for field safety, and we've seen a ton of power in this, is when we have a customer using our existing analytics.

This is just old-school data, you know, data reports. It's when they can say, oh my goodness, this team has many more gas alerts than that team. And that's strange because they're the same functional type of unit. Right?

And then they can dig into that and figure out, oh, jeez, it's actually three people, or one person in that group, or it's always happening here at this time of day. Then they can go talk to that person and go, what's going on? What you're doing is fundamentally different.

And they have hundreds, thousands of people in all different areas of the world. If you're able to home in on that one person, you can say, listen, what are you doing?

They say, I'm doing it this way. You say, don't do it that way; here's the protocol, do it this way, and we're going to see those alerts go down.

That's so powerful for us, because they do some clicking around, they have a conversation or two, they can fix the behavior and those stats go up, and then they can also show people how the stats go up. Right?

They can say, hey, this team changed their behavior, now they're doing this, and we're all safer.

That's a lot of work, you know? An EHS person has to be in there, and I think we provide that capability, but it's also a lot of work. So that's where we see the biggest value: being able to hand that insight over to somebody before they have to dig through and find it.

I think if we can unlock that at Blackline, where we can say, here's the risk and the insight, deal with that to make your company a better company. And they might look at that and go, yeah, it's not a problem; that team's supposed to have more alerts, they do a particular activity that requires that type of thing, or we have other protocols to deal with it. By all means.

But I think that's where we want to be. We don't want to put the onus just on the EHS professional to utilize those reports. We also want to be able to serve it up for them.
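As a minimal sketch of the kind of analysis Phil describes, here is one way to compare gas alert counts across comparable teams and flag the outliers worth a conversation. The file name, column names, and threshold are illustrative assumptions, not an actual Blackline export format.

```python
# Minimal sketch of the outlier check described above: compare gas alert
# counts across comparable teams and flag the teams that stand out.
import pandas as pd

alerts = pd.read_csv("gas_alerts.csv")  # assumed columns: team, worker_id, alert_time

per_team = alerts.groupby("team").size().rename("alert_count")
z_scores = (per_team - per_team.mean()) / per_team.std()

# Teams more than two standard deviations above the mean are worth a conversation.
outliers = per_team[z_scores > 2]
print(outliers.sort_values(ascending=False))

# Then drill into who and when, within each flagged team.
for team in outliers.index:
    subset = alerts[alerts["team"] == team]
    print(team, subset["worker_id"].value_counts().head(3))
```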

Right. Okay. Next question. How will the predictive risk tools change how we respond to safety threats?

Along the same lines?

Yeah.

It is along the same lines. And I think that's it: you shouldn't spend as much time digging through data.

You should be able to glean insights.

And then I think as we all know, the bigger part of your job isn't just discovering that. It's deciding what to do about that. How do you approach that person? How do you approach that team and that manager?

How do you do it in a collaborative way so that you're really trying to fix the problem? I think it's going to shift the way we deal with these issues, as opposed to going to someone and saying, you have a lot of alerts, stop it. You know?

And no one here really does that, but in a really basic way, that is how you end up approaching it, because you don't have time. You're issuing out emails that say, here are your stats, do a better job.

I think as you get these generative AI insights, things that are a little more actionable and a little more contextually relevant as you work with them over time, the conversation changes into, how do we fix that, and let's be more collaborative. That's really what EHS professionals want to be.

They don't want to be the bad guy who's telling you to be less efficient. What they want to be able to do is understand what's happening in the field and understand how they can help you get better. And if these insights can automate that, both for the EHS professional and for the person in the field, in a real, human-language communication style, we're probably going to see a lot of those benefits. I think it's just going to save time and help people communicate.

It doesn't come out of the box; you don't just get it yet. But I think that's really the North Star for using these tools.

Perfect.

Looks like we have time for probably one more, maybe two.

How can we ensure GenAI interpretations of new regulations are accurate?

Yeah.

So that particular one... Hopefully, he'll be back with us in a few seconds. I think I'm back, Darcy.

Okay. Good. Good. Okay.

Regulations and GenAI, and, you know, can you trust its accuracy? It depends how high the stakes are. Here's an example. Let's pretend you went to a conference.

You're pretty tied into what's going on. You know that there's gonna be these regulatory changes. You're usually not surprised. You know?

You know that they're going to happen. There's a lot of talk about it, and you kind of get the gist of what it's going to be. And let's say you go to some conference and they go, here are the things that are changing. You go, great.

But then when it gets published, there's this huge mountain of a document. Now, you know for your business that there isn't a huge risk there. You kind of don't know exactly how it's going to apply to all these things, but the stakes are low because you have context. Your domain expertise is key.

That's when using a GenAI tool is going to be perfect, because you can say: summarize this. Please make an executive presentation on the changes that's relevant to the C-suite. Make one for my manager; that's going to be a little more detailed.

Now make one so that I can get my managers in the field to understand this policy. Write it up at this grade level so that we all understand what we're talking about, in simple terms. It will do that for you.

You don't have to write all those documents. You have to be contextually educated enough to understand that these are the right things, but save yourself that time.
 
You don't need to write those. You don't need to regurgitate those. And I think the other thing is when you're comparing those two documents, if you say, well, compare this to this. Again, how high are the stakes?

If there's a big difference between these documents, it's going to flag that there's a big difference, and maybe that's enough for you to go read those specific pieces. So what I'm saying here is: the onus is still on you to get it right, but use these tools to your advantage to simplify your workflow, to flag things. Even if you read both documents yourself as a human being, I think we can agree there's a probability that you'll miss a change as well. So when you add a GenAI tool to your workflow, you might read the documents and then ask it to compare.

And then you go, yep, we both agree that this is it. That's where it really helps. I'm not saying you can abstract yourself from the responsibility you have. I'm saying this tool can help you in so many ways.
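As a minimal sketch of that summarize-and-compare workflow, here is one way to ask a model to flag the differences between two regulation versions for an expert to verify, again assuming AWS Bedrock via boto3. The model ID, file names, and prompt wording are illustrative; as Phil says, the output is a flag for a human to check, not the final answer.

```python
# Minimal sketch of the summarize-and-compare workflow described above:
# hand both versions of a regulation to a model and ask it to flag the
# substantive changes for an expert to verify. File names, model ID, and
# prompt wording are illustrative assumptions.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

old_text = open("regulation_2024.txt", encoding="utf-8").read()
new_text = open("regulation_2025.txt", encoding="utf-8").read()

prompt = (
    "Compare the two regulation versions below. List the substantive changes "
    "as bullet points, citing section numbers, and note which changes are most "
    "likely to affect confined-space gas detection programs.\n\n"
    f"--- PREVIOUS VERSION ---\n{old_text}\n\n--- NEW VERSION ---\n{new_text}"
)

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 1500, "temperature": 0.1},
)

print(response["output"]["message"]["content"][0]["text"])
```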

Awesome, thanks. And we are right at time. But before we sign off, if you missed them in the links section, here are a couple of articles that Phil has written recently, if you want a little more information.
 
So please take some time and check those out.
 
And as mentioned at the start of this webinar, we will be sending out the recording and a PDF of the slides. There's a lot of great information in there, so it'll give you a little more time to digest it all.

So thank you all for joining. It was a great way to spend the morning, afternoon, or evening, depending on where you are; I know it's kind of all over the map for the people who attended.

So, yeah, until next time. Stay safe, everyone. Thank you very much!


MEET THE SPEAKERS

Phil Benson
Vice President, Product
Darcy White
Director, Demand Generation (Moderator)
