Episode 76

TRANSFORMED: Developing a Comprehensive Strategy for AI

In this episode of TRANSFORMED, Patricia Patria, Chief Information Officer at Babson College, talks about the immediate learnings and actions Babson was able to take as a result of a quick-turn experiment she recently led to explore AI in higher education.

Learn how this experiment, implemented across a range of courses, yielded clearly defined needs, tools, and policies around AI that ensure a positive experience for students and faculty alike, balancing AI's potential against its inherent risks.


 

References:

Patty Patria

Babson College

Babson College website

 

Engage with host, Joe Gottlieb, at discussion@higher.digital at any time!

Subscribe to TRANSFORMED wherever you get your podcasts to hear from higher ed leaders as soon as new episodes drop. 

Patty Patria:

So we do. We have a gen AI policy for our employees that requires them to use closed tools (again, for us, that's Copilot) when entering any Babson sensitive or proprietary data. We also don't allow employees to enter what we call highly sensitive data, so things like socials and credit cards, into gen AI tools at all right now. And from an academic perspective, we recommend Microsoft Copilot to faculty, staff, and students and offer, of course, Azure OpenAI as well for advanced functionality. We try to educate faculty and students on open versus closed systems. But at the end of the day, we allow faculty and students to use the tools as long as they're not entering any Babson proprietary or sensitive data, 'cause there's academic freedom and we want people to have the choice to use what they wanna use, again, as long as they're not entering sensitive data.

Joe Gottlieb:

That's Patty Patria, CIO of Babson College, summarizing part of their gen AI responsible use policy. That policy is part of a much broader strategy that began with a group of forward-thinking faculty who wanted to incorporate AI into their courses, then leveraged a quick-turn experiment to test available technologies, and ultimately led to a standard set of tools and user guides provided to students, faculty, and staff. Patty and I talked about how this comprehensive AI strategy came about, the impacts thus far, and what lies ahead for Babson's AI era. I hope you enjoy our conversation.

Joe Gottlieb:

Welcome to TRANSFORMED, a Higher Digital podcast focused on the new why’s, the new what’s and the new how’s in higher ed. In each episode, you’ll experience hosts and guests pulling for the resurgence of higher ed while identifying and discussing the best practices needed to accomplish that resurgence. Culture, strategy and tactics, planning and execution, people, process and technology. It’s all on the menu because that’s what’s required to truly transform. Hello, welcome and thanks for joining us for another special episode of TRANSFORMED. My name is Joe Gottlieb, president and CTO of Higher Digital, and today I am joined by Patty Patria, Chief Information Officer at Babson College. Patty, welcome to TRANSFORMED.

Patty Patria:

Thanks, Joe. Happy to be here. What would you like to talk about today?

Joe Gottlieb:

I’m glad you asked. I know we’re gonna get really deep into this topic of developing a comprehensive strategy for AI, but first I’d love to hear your thoughts about how your personal journey in higher ed began and how it really drives the passion you have for working in this field.

Patty Patria:

Sure, happy to share. So my personal journey actually started in the private sector. I started my IT career as a customer relationship management consultant, working for a very small company. That allowed me to do everything from pre-sales to software configuration to development to training, 'cause in small companies you do all of those things. So I learned to be a sort of jack of all trades at a young age, and then moved on into the project management realm as director of a PMO, and after that became an information security officer before becoming a CIO. Having done all of those things was really helpful to my career. I received my MBA along the way, which was also helpful in the journey. Through all of that, you gain the skills you need to understand your customers, to manage complex, difficult projects, and to stay on top of trends in your career. And now we're at the precipice of another amazing trend with AI. So I think all of that helped me get to where I am today.

Joe Gottlieb:

Well, I'm not surprised to hear that background, because what we're about to get into reflects what I now know is your ability to juggle those things, think about lifecycle, think about customers all the way through. So let's jump into it. You know, I think it's important to set the table with how different personas are thinking about AI. So when you consider your faculty, staff, and leadership stakeholders, how would you characterize their perspective on AI? Would you say it's sort of polarized, or ambivalent, apathetic? Are they all in? How would you characterize it?

Patty Patria:

So I would characterize higher education right now as quite polarized. You have the camps that are really afraid of AI and want to stay far away from it, and in fact want to ban it on college campuses; there are many of those. There are some who are really excited about it and want to start experimenting. But for Babson, I would say we are actually all in on AI. Our leadership and key faculty and staff are huge proponents of AI, and we actually have built, over the course of the last several months, a formal strategy around AI to help influence education in a positive way. There is some fear of change still, and we work very collaboratively with faculty and staff to try to address some of that. But there's a lot of strong support for us moving full steam ahead with our AI endeavors in the academic realm, in the administrative realm, and most importantly in the student realm, because our students are very, very interested in it.

Joe Gottlieb:

Well, that’s exciting. So how did this journey begin at Babson? Gimme a little bit of a background.

Patty Patria:

Sure. So the journey began really in the summer of 2023. We had several faculty members who had formed what they call the Generator, about six or seven from across disciplines, anywhere from entrepreneurship to liberal arts to management, really different, unique faculty members who said, we wanna start incorporating AI into our coursework. So we partnered with them to see what services we could turn around quickly. And they came to us, I would say, in mid-August, and the semester started at the end of August. So in a two-week timeframe, we turned around a solution where we leveraged Azure AI services in key courses, and we worked with maybe 10 different faculty members, 30 students per course, to set up Azure AI in the classroom from an experimental perspective. We learned a lot from that experiment; it didn't necessarily work the way we wanted it to initially, but we would meet with the faculty and get their feedback, and they said, you know, we really need the latest and greatest in AI. We need DALL-E 3 and we need GPT-4. And at the time we were using DALL-E 2 and GPT-3.5. So we took it, evolved it, and moved on to our next steps.

Joe Gottlieb:

So that was, wow, what a way to get started: a two-week quick turn on a request from a band of professors that had assembled and said, we want to go into this in earnest. That's exciting. So what were some of the courses included in the experiment that gave it some texture? You must have run into some combination of wins and losses, right?

Patty Patria:

Yeah. So one of the big ones was what's called an FME course. It's essentially for our first-year students, and it's an immersion into entrepreneurship. The faculty work with students to create new products or services, and they wanted to leverage AI in those courses to help students build better products and services. With DALL-E, a lot of that was image generation: how to generate an image to improve upon the product. And some of them would even take the images that they generated with AI and, you know, bring them over to a 3D printer and print out the product after the fact. Because, again, Babson's so entrepreneurial, there's a lot of ways that we can integrate AI into the curriculum.

Joe Gottlieb:

Awesome, awesome. Okay. So you had this experiment come together on a quick turn just as a term was beginning. What were some of the learnings that you took away from that?

Patty Patria:

So I think some of the biggest learnings came from listening to the feedback from our students and faculty about what was working and what wasn't. The technology was really important: they wanted the latest and greatest, with GPT-4 and DALL-E 3, and ease of use. The first initial experiment that we rolled out with Azure AI was not super easy to use. But again, this field is changing so fast that by the time the experiment was over, there was a great new technology available that was much easier to use. So what we did after reviewing the experiment was, in spring of 2024, we rolled out Microsoft Copilot, which had DALL-E 3 and GPT-4, to all of our faculty, staff, and students, which gave them the cutting-edge tools they needed in a very easy-to-use manner.

Patty Patria:

I think the other important thing we learned from our experiment is about open versus closed systems. Again, in summer of 2023 there wasn't as much of the bad press around information being leaked, from copyright issues to everything else. Now there's a lot more, so people are aware of it. But being able to educate our students and our faculty and staff, don't put confidential data in an open system like ChatGPT, make sure you use a closed system like the one that we're providing for you so that your information doesn't get leaked, was an important part of the learning as well.

Joe Gottlieb:

And just to emphasize that point, right? In the case of ChatGPT, these large language models have trained on the internet; they have a ton of knowledge that has already been sucked in, and that's their base. But then even as you start to prompt it, you're sending signals, and whatever it gathers, you're accumulating new assemblies of information, at least inside that session. And of course, the technology is absorbing it across all the users. So I think, if I've got this right, you deployed this technology in a way that there really was a one-way valve. You were gaining from the training that had been done by the language model in advance of its deployment, which could be updated with, I would imagine, subsequent training on more voraciously consumed content, which may or may not be running afoul of the copyright stuff; that's the whole thing going on there. But importantly, you were keeping all of the activity and results that you were pulling back proprietary to Babson, inside of your domain. Do I have that right?

Patty Patria:

That is absolutely correct. And I think one of the important things is, with a tool like Copilot, which is proprietary to our domain, it's an end-user tool. So any end user can go in and enter prompts at any point in time. But again, risk and security are really important to us. So we did the research and we validated that once a prompt is entered and you close out the session, that data is deleted. So whatever you entered is not going to be stored, not going to be used in a training model, and, again, not going to be leaked into the public domain. And then for the models that we control within our Azure AI environment, of course we load the data and we keep it there for our duration, but that, again, stays confidential to Babson. Nobody else can see it but us. And that's a huge plus for us, obviously, from a risk and security perspective.
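To make the "closed system" idea concrete, here is a minimal sketch of what calling an institution-controlled Azure OpenAI deployment can look like from code: prompts and completions travel to a resource inside the institution's own Azure tenant rather than a public consumer service. This is illustrative only, not Babson's actual configuration; the endpoint, deployment name, and environment variables are hypothetical.

```python
# Minimal sketch of a "closed" deployment: the model endpoint lives in
# your own Azure tenant, so prompts stay within your institution's domain.
# Endpoint, deployment name, and env vars below are hypothetical.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<your-resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="campus-gpt4",  # hypothetical deployment name created in your tenant
    messages=[
        {"role": "system", "content": "You are a course assistant for Babson students."},
        {"role": "user", "content": "Summarize the key steps in a product design sprint."},
    ],
)
print(response.choices[0].message.content)
```

The "gated community" effect Joe describes below comes from this architecture, not from the model itself: the same GPT-4 weights serve the request, but the traffic never leaves the institution's cloud boundary.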

Joe Gottlieb:

Right. And that just comes down to the way that Copilot is packaged for you to be able to do that. So as part of the experiment, you sort of got your feet wet in this domain, and then you realized, okay, there's a packaging where we can deploy with that kind of, I like to call it, gated-community or intranet effect for the technology.

Patty Patria:

Definitely.

Joe Gottlieb:

Okay. So you mentioned that there were these early adopters on the faculty side who formed this sort of Generator team, which sounds really cool. Was it related to a theme we see a lot in higher ed these days, which is this shift from sage-on-the-stage to guide-on-the-side coaching, where now AI is being used as this very powerful tool so that professors and students can do problem solving together, in just a totally different learning environment? Is that happening at Babson?

Patty Patria:

That is definitely happening at Babson. And I would say our faculty that are actively involved in AI definitely fall under guide on the side. Whether they're advising students on, again, image generation and then taking those images to turn them into products, or even, from a humanities perspective, in an English language arts class where, instead of, you know, doing basic essay writing, they're having students generate essays within Copilot or GPT and learn how to take the output, not just copy and paste it, but massage it, use it to generate ideas, use it to help you learn and produce better, higher-quality materials. Our faculty are doing that in many of these courses with students across the board.

Joe Gottlieb:

Yeah. Particularly with English, it seems to me that the whole initial furor over students writing essays using AI, you know, that all made sense. But when you start to shift to thinking about, okay, how do we write good business communications? How do we capture summaries of content in a form where quote-unquote plagiarism is less of an issue? It's more a matter of synthesis and assembly. And so there are these use cases that actually are very well suited to AI, where you're actually winning, you're doing a better job, if you are leveraging what has been done before. I would think that would apply both to the innovative spirit of Babson and to the sort of new applied liberal arts, in terms of how students are learning there.

Patty Patria:

Yeah, absolutely. And even on the administrative side of the world, there's so many use cases for AI to help in normal business operations. I'm using it on a daily basis. I was trying to draft a business continuity plan last week and leveraged Copilot to help me get the tenets of the business continuity plan down. It probably saved me two hours, and it was fantastic. And then, again, what we try to teach people is that AI is not going to replace the human; it's going to augment the human, right? You still need to review the content, check it, change it, and make sure it has what you need, but it can give you the ideas or the basic tenets to save you time, so that you can use that time on other things.

Joe Gottlieb:

So I don't mean to jump out of order here, but it just strikes me while we're on that point: have you decided to suggest any sort of attribution protocol for when AI is being used? As these use cases vary, right, in some cases you probably don't want to use AI directly, or you do want to attribute it. I'm just wondering if you've gotten specific on that point at all.

Patty Patria:

So as a matter of fact, we have. One of our librarians has actually done a lot of focused work around AI literacy, and we publish guides, which are available on our website for our students and faculty, on how to cite it properly if you're using it and share that you're attributing to it. So yes, we do a lot around AI literacy with our students and faculty as well.
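Babson's specific guides aren't reproduced here, but as one example of what such citation guidance typically looks like, APA's published convention cites a chatbot's output roughly as "OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat", with the prompt that produced the output described in the accompanying text.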

Joe Gottlieb:

I feel like that's gonna be very useful. And I imagine the what's-acceptable-use, or maybe that's not even the right term; I like AI literacy, that notion of what's formal, what's appropriate, what is deemed to be scholarly, even, right? I mean, all the rules have been set for the original sources of material, right? If you use something, unless you really, really re-synthesize it, you cite it, right?

Patty Patria:

That’s correct.

Joe Gottlieb:

Yeah. But in this world, things get a little bit new and sometimes messy. Okay, so how about policies: do you have any formal AI policies at this point in your effort?

Patty Patria:

So we do. We have a gen AI policy for our employees that requires them to use closed tools (again, for us, that's Copilot) when entering any Babson sensitive or proprietary data. We also don't allow employees to enter what we call highly sensitive data, so things like socials and credit cards, into gen AI tools at all right now. And from an academic perspective, we recommend Microsoft Copilot to faculty, staff, and students and offer, of course, Azure OpenAI as well for advanced functionality. We try to educate faculty and students on open versus closed systems. But at the end of the day, we allow faculty and students to use the tools as long as they're not entering any Babson proprietary or sensitive data, 'cause there's academic freedom and we want people to have the choice to use what they wanna use, again, as long as they're not entering sensitive data.

Joe Gottlieb:

Well, I think that sounds very pragmatic, right? It's sort of, okay, look, we're all adults here; it's responsible use, so you're educating. Now, in the case of the social security numbers and credit cards, does Copilot actively prevent that or not? I'm just curious.

Patty Patria:

I have not seen it yet; I actually have not tested it yet. I doubt it does, unless you can build it into some special bot that you're building, I'm sure.

Joe Gottlieb:

It's almost like a DLP (data loss prevention) rule.

Patty Patria:

Yeah, you can type whatever you want in there.
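Copilot doesn't enforce this out of the box, per Patty, but the DLP-style rule Joe alludes to is straightforward to sketch for a home-built bot: screen each prompt for obviously highly sensitive patterns before it ever reaches the model. A minimal illustration follows; the patterns are deliberately simplistic, the function names are hypothetical, and real DLP products use far more robust detection.

```python
# Sketch of a DLP-style prompt screen: block prompts containing likely
# SSNs or credit card numbers before they reach any gen AI tool.
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")          # hyphenated SSNs only, for brevity
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")      # 13-16 digits, optional separators

def luhn_valid(number: str) -> bool:
    """Luhn checksum, to cut false positives on long digit runs."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(digits[0::2])
    total += sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def screen_prompt(prompt: str) -> str:
    """Raise before the prompt is sent anywhere."""
    if SSN_RE.search(prompt):
        raise ValueError("Prompt appears to contain an SSN; blocked by policy.")
    for match in CARD_RE.finditer(prompt):
        if luhn_valid(match.group()):
            raise ValueError("Prompt appears to contain a card number; blocked by policy.")
    return prompt

# screen_prompt("my card is 4111 1111 1111 1111")  # raises ValueError
```

A filter like this would sit in front of the model call, turning the policy Patty describes from guidance into an enforced preventative measure, which is exactly the gap Joe raises next.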

Joe Gottlieb:

Okay. Well, you gotta start with the policy and keep people informed even before you have the preventative measures, I'm sure. Yeah. Okay. So now let's shift over to the students. How would you characterize their interest in AI? What have you sensed as the pulse there?

Patty Patria:

Our students are very interested in AI. We currently have a student AI club with more than 250 students in it, and several students who've actually built products or services with AI embedded in them. Our faculty in the Generator did a survey of students, and 94% say they are already using AI in their schoolwork. Their voraciousness for learning and embedding AI into what they do is just really amazing. We also have a Microsoft Learn platform that's free for all students, so they can take the latest and greatest technical courses on AI. And we're seeing students excited about this as well, because they can supplement their formal education with some targeted technical skills that provide them with mastery of competencies that incorporate AI and prepare them for the post-collegiate world. And then from an academic perspective, faculty are the drivers of AI being incorporated into the curriculum, and we in IT partner with them to support that technology and support the students. I actually have been meeting with several of our students in the AI club to understand their needs and desires; they're just amazing students, and I'm learning incredible things from them. That's helping us build our strategy of what we offer them going forward.

Joe Gottlieb:

Interesting. So how about the faculty: have you seen the group that's actively involved expand beyond the original Generator crew?

Patty Patria:

I think it will. We're just finishing off the semester, and I think in the fall, in the next academic year, we're gonna see it expand quite a bit. This faculty group actually has plans to do some special training over the summer, which, again, we will help support, on AI use cases in the classroom. And because we have very, you know, business- and entrepreneurially-focused faculty here, I think they're maybe trying to incorporate this more than other places.

Joe Gottlieb:

Got it. Interesting. Okay. So how about education? I know you're doing some interesting things to help there. I'll ask first about students, but we can go into other personas as well. So how are you educating students on the responsible use of AI?

Patty Patria:

Sure. So we've actually created a knowledge base article specifically targeted to students on best practices for AI, as well as responsible use of AI. I was actually at one of our AI student club meetings a few weeks ago and spoke about closed versus open systems, responsibility, and things like that. Our library has also built materials to help teach students how to use AI responsibly in coursework, which, you know, covers what we mentioned before about how to cite things properly when you're incorporating it. And the dean of the college has asked faculty to include a statement in their syllabuses about how they use AI, so that students understand when faculty wanna use it, when they don't wanna use it, things along those lines.

Joe Gottlieb:

Interesting. So if we start to think about policy enforcement, I know you've mentioned you have policies. How do you see that shifting over time as more courses incorporate this intentional use of AI? What does the horizon look like there? Is it too early to tell?

Patty Patria:

I see it in two different areas. I think we're gonna see more ethical guidelines and codes of conduct on how to use AI. For instance, if we have students who might be handling sensitive employer data during an internship, we're gonna have to put guidelines in front of those students, as will the employers, 'cause they're gonna be concerned about privacy, security, access controls. So I think we'll continue to see that. I also think we're gonna see a plethora of AI copyright lawsuits popping up; there's quite a few that have already popped up. And we'll see more people using trusted systems where they have indemnification against those copyright lawsuits. Copyright's really interesting. We're starting to do a little more in that area. We're trying to build our own AI math tutor bot, and understanding what you can and cannot use, I think, is going to be really important as this technology evolves.

Joe Gottlieb:

You mentioned bots. So have you yet used a bot capability that fronts a large amount of reference material about Babson, maybe, you know, administrative details or things to find out? Is that an active use case at Babson?

Patty Patria:

So we've actually had one faculty member build a bot in a class with his students, and he was using his own proprietary data, so there were no issues there. And we are actively engaged in a project with Microsoft right now to build a math AI tutor bot for our students. And again, we're working through all the copyright compliance to ingest materials and things like that, so that the bot has the data it needs to support students down the road.
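Patty doesn't describe the tutor bot's internals, but a common pattern for grounding a bot in ingested course materials is retrieval-augmented generation: embed the licensed materials once, retrieve the most relevant passage for each student question, and hand it to the model as context. A minimal sketch under that assumption follows, reusing the hypothetical Azure OpenAI client from earlier; the deployment names and sample chunks are invented, and a real system would use a vector database rather than in-memory arrays.

```python
# Sketch of retrieval-augmented generation (RAG) for a math tutor bot:
# embed course materials, retrieve the best-matching chunk per question,
# and pass it to the chat model as grounding context.
import os

import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="campus-embeddings", input=texts)  # hypothetical deployment
    return np.array([d.embedding for d in resp.data])

chunks = [
    "Derivatives measure the instantaneous rate of change of a function.",
    "The chain rule: d/dx f(g(x)) = f'(g(x)) * g'(x).",
]
chunk_vecs = embed(chunks)  # ingested once; in practice stored in a vector DB

def answer(question: str) -> str:
    q = embed([question])[0]
    scores = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    context = chunks[int(scores.argmax())]  # most similar chunk by cosine similarity
    resp = client.chat.completions.create(
        model="campus-gpt4",  # hypothetical deployment
        messages=[
            {"role": "system", "content": f"You are a math tutor. Ground your answer in: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```

The copyright compliance work Patty mentions sits at the ingestion step: only materials the institution is licensed to use end up in the chunk store the bot draws from.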

Joe Gottlieb:

Gotcha. Okay. Lots of exciting things going on. So if we look towards the future, how are you leveraging what you’ve learned from these couple of iterations of AI deployment and your stakeholder collaboration to drive the future of AI at Babson? What’s possible?

Patty Patria:

Oh, lots of things are possible. We are very fortunate at Babson to have a collaborative and engaging environment where faculty and staff partner together to achieve some great outcomes. IT has worked jointly with the faculty in the Generator to build this comprehensive strategy for Babson: the faculty are addressing the academic components of it, and I'm working to address both the student and the administrative components. We're planning to infuse AI into our operations, provide AI access for student experimentation, and, of course, support the continued great work the faculty in the Generator are doing to bring AI into the curriculum, all while supporting the strategy and mitigating risks. We've also created a generative AI policy on how to use it, and we're looking to spin up a center of excellence or best-practice area. We literally just started this a few weeks ago; we're in the discovery phase right now. But again, we wanna provide the proper resources to our faculty, staff, and students to make sure that when they start to build things, they're doing it ethically, they're doing it the right way, and we're going through the right processes. So there'll be more of that coming as well.

Joe Gottlieb:

Very exciting. Alright, let's bring this to a close. How would you summarize three takeaways to offer our listeners on this topic of developing a comprehensive strategy for AI?

Patty Patria:

So first and foremost, experimentation is important. This technology is so new that you really need to try out different options, see what works, and then continue to improve; you have to learn how to fail fast when something doesn't work, move on, and come up with a different solution. The second is that AI has great potential, but it also has great risk. You need to educate employees on the difference between open and closed systems, ensure that confidential, proprietary data does not end up in open systems, and really be aware of copyright. And last but not least, in higher education, partnership is key. AI will affect administrative operations, academic teaching, and the way our students learn and work. And IT, faculty, and administrative leadership really need to partner together to help our students learn these new technologies so that they have all the skills they need to succeed.

Joe Gottlieb:

That's a great summary, Patty. Thank you so much for joining me today.

Patty Patria:

Thanks for having me. It was a pleasure.

Joe Gottlieb:

Alright. And thanks to our guests for joining us as well. I hope you have a great day, and we look forward to hosting you again on the next episode of TRANSFORMED. Yo, stop the music. Hey, listeners of TRANSFORMED. I hope you enjoyed that episode, and whether you did or not, I hope that it made you stop and think about the role that you are playing in your organization's ability to change in the digital era. And if it made you stop and think, perhaps you would be willing to share your thoughts, suggestions, alternative perspectives, or even criticisms related to this or any other episode. I would love to hear from you, so send me an email at info@Higher.Digital or Joe@Higher.Digital. And if you have friends or colleagues that you think might enjoy it, please share our podcast with them; you and they can easily find TRANSFORMED wherever you get your podcasts.

 

