From pixels to profits: How AI is painting a new picture for retail – a FLOW Summit replay

Written with Gemini Pro.
Reviewed, revised and approved by Signifyd humans.

From simple analysis to creative generation: We’ve come a long way from the early days of AI struggling to identify basic shapes. The 2012 breakthrough, marked by Google’s feat of recognizing cats in YouTube videos, ushered in a new era of AI learning without hand-engineered features. This paved the way for powerful generative models like GPT-3 and ChatGPT, capable of not just analyzing data but generating human-quality content, including text and images.

Revolutionizing retail with practical applications: Imagine AI assistants simplifying data analysis for business users, like Zenlytic does with conversational BI tools. This is just the tip of the iceberg when it comes to practical applications. Generative AI can:

  • Craft personalized marketing messages and product descriptions: Say goodbye to generic copywriting! AI can tailor content to individual customers, boosting engagement and conversion rates.
  • Automate image editing: Photoshopped product visuals? Not anymore. AI can modify images on the fly, saving time and resources while ensuring brand consistency.
  • Streamline email marketing: Forget about manually piecing together emails. AI can generate personalized emails based on customer data and purchase history, adding a human touch without the human effort (see the sketch after this list).
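
To make the email idea above concrete, here is a minimal sketch of how such a generation step might look. It assumes a hypothetical `llm_complete()` helper standing in for whatever text-generation API you use; the customer fields and prompt wording are illustrative, not a description of any particular vendor’s implementation.

```python
# Illustrative sketch: draft a personalized marketing email from customer data.
# `llm_complete` is a hypothetical stand-in for any text-generation API call.

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to a generative model provider."""
    raise NotImplementedError("Wire this up to your model provider of choice.")

def personalized_email(customer: dict) -> str:
    # Fold structured customer data into a plain-language prompt.
    prompt = (
        "Write a short, friendly marketing email.\n"
        f"Customer first name: {customer['first_name']}\n"
        f"Recent purchases: {', '.join(customer['recent_purchases'])}\n"
        f"Items left in cart: {', '.join(customer['cart_items'])}\n"
        "Tone: warm and on-brand; no discounts unless listed above."
    )
    return llm_complete(prompt)

# Hypothetical usage:
# personalized_email({
#     "first_name": "Dana",
#     "recent_purchases": ["trail running shoes"],
#     "cart_items": ["merino running socks"],
# })
```

A person would still review the draft before sending, which is the “AI in the middle” pattern discussed in the session below.
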
Highlights from FLOW Summit 2023

This FLOW Summit 2023 replay delves into the fascinating world of generative AI, exploring its impact on the future of commerce. Moderated by Katherine Bahamonde Monasebian, the session features insights from Srinath Sridhar and Scott Friend, who paint a captivating picture of AI’s transformative potential.

Beyond automation: The future of work: As AI takes over routine tasks, workforce dynamics are bound to shift. While some jobs may become obsolete, new opportunities will emerge, requiring different skill sets.

Navigating the ethical landscape: With great power comes great responsibility. The ethical considerations of AI are crucial, and the session addresses them head-on. Concerns over bias, transparency and potential job displacement are discussed, emphasizing the need for responsible development and implementation. Scott Friend’s analogy of using AI in the middle, with human oversight ensuring brand voice and ethical consistency, resonates strongly.

The takeaway: The generative AI revolution is upon us, and it’s transforming the way we shop and sell. Embracing this technology thoughtfully, with a focus on human-AI collaboration and ethical considerations, will be key to unlocking its immense potential for a better retail future.

Shape your ecommerce future at FLOW Summit 2024

Join industry leaders in New York City on April 17 for the Signifyd FLOW Summit, an immersive experience designed to equip you with the resilience needed to thrive in the evolving retail landscape.

Dive deep into:

  • Actionable strategies to navigate change, combat fraud and build lasting customer loyalty
  • Engaging sessions led by renowned brands like Abercrombie & Fitch and Forrester
  • Dedicated tracks for both ecommerce and fraud & risk professionals

Network with 350+ peers and forge valuable connections that will propel your business forward. Explore the agenda for a full breakdown of topics and speakers.

Don’t miss this exclusive opportunity to shape your ecommerce future. Register now!

TRANSCRIPT

Introduction to AI evolution

Katherine Bahamonde Monasebian (00:12):

We know that there’s no hotter topic right now. You can’t really escape the headlines, and there are a lot of varying perspectives, ranging from hacks to boost your productivity or make money, all the way to this is an existential threat to humanity, à la Elon Musk’s “threat to civilization.” And then you’ve got skeptics in the middle that are saying, this is another metaverse, another crypto. It’s a little bit overhyped, no fearmongering. Everybody calm down. It’s going to be okay. And so our goal today is to cut through some of that noise and try to find the signal. And so it’s kind of hard to know where to start, right? There’s just been this explosion. So I thought we could start off, Sri, by just level setting a little bit. So this feels like it came out of nowhere, but did it? Before we talk about where we’re going, how did we get to where we are today?

Srinath Sridhar (01:16):

Yeah, great question, and this is a fun question actually. I’m an academic by background, so it’s been a lot of fun to see this grow. If you step back, I would say AI, especially the neural nets part of AI, probably started in the seventies, early eighties, but it was pretty dormant from there until 2012. So it was a huge period, like 20, 30 years, where AI existed, neural nets existed, but not too much happened. But in that period, we sort of understood that there were some canonical problems that AI solved. And two of the problems that we traditionally associate with AI would be image recognition and speech recognition. So those were canonical problems. We sort of understood them, but really not too much progress happened. And funnily enough, 2012 was when we had a pretty big breakthrough, and that’s when Google came out and basically said it could identify all the cats in YouTube videos.

(02:23):

And so it’s funny that that actually had a utility, and it was pretty big, because until then, for the 20, 30 years in between, we were doing a lot of hacks, a lot of code that we had to write to do processing of images. We had to do a lot of what we call feature extraction. So all of AI and ML involved a lot of feature generation even before you could actually put it into the machines. When Google came out in 2012, it basically got rid of all of that. It literally just said, here are a bunch of videos, have a bunch of humans label the videos with cats and without the cats, throw in lots of compute resources, build a large model, and then it could just identify cats. So that’s one theme we will see after 2012, which is no features, nothing, just let the AI do everything it wants to do.

Historical context of AI

(03:18):

After that, in 2014, Google saw this and then acquired DeepMind, which was a company based out of London building large models to play video games, any video game basically. And they pointed them at playing Go very well. People don’t think about it this way, but when Google came out with AlphaGo, that was the first generative AI model, because until then, in all the games, we were trying to find the best move at any position, and that didn’t work for Go because the number of moves in Go was very large. So instead, what Google did, and the breakthrough there, was to ask, what’s a human-like move at any position? And so that was this generative AI moment where you could actually take a Go position and ask what’s a move that a human would make. Not what’s the best move in this position, but what’s a move that a human would make?

(04:16):

It might not be the best move, but that’s okay, because once you reduce the state space to all candidate human moves, then you could have machines analyze what is the best among them. And then OpenAI got set up, because OpenAI thought Google was running away with all of this. That was 2015 or so. It was a nonprofit, and Google in between came out with BERT. So they applied all the AlphaGo strategies to generate text from these large models. OpenAI took BERT and came out with GPT-2, which was also open source, and this is 2018. And then by 2020 they came out with GPT-3, at which point they understood that they were ahead of Google in terms of technology. At that point, they did not open source GPT-3 and, unsurprisingly, made it a for-profit. That’s when we started the company, and it was clear that this movement was happening. Then everybody knows 2021 or 2022 is when we had ChatGPT, and in 2023, GPT-4. That’s the history.

Katherine Bahamonde Monasebian (05:28):

Wonderful. So that’s a great history lesson, how we got to this explosion that we’re in right now. Scott, where do we go from here? What does the evolution look like at this juncture, five to 10 years out? Not theoretically, but in its implications for commerce, for retail. What’s your perspective?

Scott Friend (05:47):

I mean, I guess the big question is, is there a reason why now is different? Twenty-five years ago, when we started building my former company ProfitLogic, we were using big data. It was terabytes of retail POS data, big computers, the biggest we could find, and a bunch of smart math and physics PhDs to optimize stuff. That was AI in that moment. And as Sri just talked through, there have been incremental innovations and some big step-function innovations over and over and over again over that period of time. And the world has improved and changed incrementally. It hasn’t changed dramatically. So what’s different now? Are we going to change incrementally again and again and again over the next five or 10 years? Or is there going to be a sea change? And I, for one, and I’m not an expert, just one person’s opinion, think we are at the cusp of a major sea change in how we work as a result of these current innovations.

(06:46):

Why is that? What’s different now versus the innovations of the last 20 years? Thing number one, these new models, the ones Sri talked about, like ChatGPT, are general purpose in nature versus the historically very special-purpose models. And general purpose means there are lots and lots and lots of applications that can be built on top of them. That’s new news number one. New news number two, these models are able to generate stuff, not just analyze stuff, hence the name generative AI. And they can generate stuff that is truly indistinguishable from human-created content, whether that’s language or images. That’s totally new news. And I think thing number three, which really influences my opinion on the level of adoption, is the fact that these models are designed to be interacted with in a super approachable way, right? Natural language and image interaction to the models and response from the models, which is a dramatic change in the potential interaction between humans and computers. So you add all that up, and I think it’s easy to make the argument that many types of work that get done in all of our operations are going to be influenced dramatically by AI over the next five years.

Srinath Sridhar (08:05):

One quick addition: there’s a documentary on YouTube called AlphaGo. It’s a great documentary, it’s free, so if you really want to catch up or watch some of this, it’s actually beautiful because of the human elements as much as the AI elements. So I definitely encourage people to watch the documentary. It’s a lot of fun.

Scott Friend (08:28):

Or you could have the AI watch it for you, summarize it, clip it.

Katherine Bahamonde Monasebian (08:35):

So this is going to be a profound shift that’s going to fundamentally change how we do business. That’s the perspective. Let’s break it down a little bit more tangibly, because it’s hard to actually understand, from BI all the way through consumer experience, how is this affecting us commercially? And Scott, why don’t you start from your portfolio of companies: where do you see the practical applications? Not theoretically, but in the immediate term.

Scott Friend (09:05):

Yeah, I’ll give a couple of examples. And these are all relatively nascent for obvious reasons, but just a few that I think probably are relatable to everyone in this room. So one: most people here, raise your hand if you’ve used a BI tool of some type to analyze data. Great. Historically, you’ve got to ask questions of the BI tool, often using SQL, or maybe through a programmer in your shop, or potentially by pointing and clicking and kind of diving down into answers deeper and deeper. There’s a new generation of BI, which is arguably no more powerful on the inside, but the interface to it is purely conversational. We’ve got a company in our portfolio called Zenlytic that has built this conversational interface on top of BI, where you can simply say, why are sales lower in the northeast this month? What’s going on? Which channels aren’t working?

(10:02):

Why aren’t they working? And get the answers back in exactly the form you’d get if you had asked someone in your shop to go dig into that data. And so what does that mean? That particular example means there’s going to be less need for data analysts to do work that could be done by a business user right out of the gate. I think we’re going to see much more of that. Number two, does anyone in this room, in their businesses, use tools like Attentive to send SMS messages or emails? Everyone should be, by the way. So it’s an SMS marketing tool. One of the inhibitors to sending more SMSs or more emails is just content, right? Like, what do I want to send? I hit a button and it goes out to hundreds of thousands of users; it’s got to be good. What if the machine could generate lots and lots of examples of really good content for you that you could choose from?

Evolution of AI: 2012 to present

(10:55):

If you’re in an Attentive situation, you’ve got 8,000 customers, all brands and retailers, literally billions of messages that you’ve sent and measured the impact of historically. So you have this unfair advantage of a data asset that you can use with generative AI to create terrific messages and give the user a starting point. The user no longer has to write all that copy themselves. Another very, very typical example, and the last I’ll give, is in the world of images. Anyone here who’s in the commerce world still has to have photos of their products. And typically that requires a photo studio, whether that’s an online studio in the cloud like soona, or you hire a photographer and have your own studio. But adjusting those product images with a different background, a different setting, a hand model, a cat, all that stuff requires an incremental shoot. In the world of generative AI, you can simply say, add a cat, change the background so it looks like my shampoo bottle is on a beach in Florida and not in someone’s bathroom. All that stuff can be done automatically, which saves time and money. So I think these are simple, practical examples, and there are lots of businesses, some businesses like the players in this room, some vendors, who have the training data in-house to make these models really powerful really quickly.

Katherine Bahamonde Monasebian (12:20):

Sri, would you agree with those kinds of rough buckets of segmentation? There are large volumes of structured and unstructured data that are hard for a human to process, then there’s content, then there’s copy. We’ve heard a lot about the death of the copywriter and such. Are those roughly where we see the biggest commercial opportunities in the shorter term, what Scott just mentioned he’s seeing?

Srinath Sridhar (12:46):

Yeah, I can add some of those dimensions, especially on the structured and unstructured side. Also, the other part that it reminds me of is when Raj was presenting, he basically said, look, there’s a lot of interesting stuff that’s happening with private datasets, because like Scott mentioned, with all of these pre-trained models, pretty much everybody gets the same model. I mean, there are very few foundational models today, and it’s all the same. So what’s going to be different is a lot of private data, which Rajesh already mentioned. And so training or fine-tuning on this private data is going to be pretty important. But for structured and unstructured data, it’s actually a pretty good distinction, because if you look back at what Scott said in terms of what has changed over time, one was this fact that you can generate stuff. One was the fact that it’s pre-trained, so it’s not a specialized task. But actually one other change is, if you go back to before 2010, all the action was on structured data.

(13:50):

So for example, you would do things like demand prediction, or Scott worked on pricing, and these are all very structured data problems. Fraud: I have a customer, they bought a product, this was the price of the product. Very structured data, and all the analysis that you’re doing is on structured data. But if you look at the action that’s happening now, it’s all on unstructured data. Here is copy, I want to write this copy; here is an image, let’s change that image. There’s no structure to it. It’s all unstructured data. So that’s firstly a pretty big distinction. Having said that, the space is moving really fast. So Scott also mentioned BI tools; that’s now starting to combine structured and unstructured data. And I think what’s going to happen in the future is that the same advances that we have had in unstructured data, we can actually bring back into structured data.

(14:44):

So I’ll give you an example. Let’s say that somebody has a shopping cart and they have a bunch of items in the shopping cart, and you want to send an email. The way you would do it today, even with all the advances, is to get all the structured data from a whole bunch of databases and feed that into one of the models that you have, a foundational generative model, which will then write the email. Going forward, I don’t even think you need to do that, because what you can actually end up doing is to say, here are all my integrations. Here’s my integration to Signifyd, here’s my integration to my warehouse management system, here’s my integration to my order management system, here’s my integration to ERP systems. I’m not even going to tell you what the schema is for any of the tables. Write that email. On the technical side, we’d have to redo some of the stuff that GPT-3 came out with, but there’s no technical reason why that can’t be done. So I think there’s lots to be done on structured data going forward.
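
What Srinath describes as the flow today, pulling structured cart data and then handing it to a generative model to draft the email, might look roughly like the sketch below. The table name, columns and `llm_complete()` helper are illustrative assumptions, not his or Signifyd’s actual implementation.

```python
# Sketch of the "today" flow: query structured cart data, then ask a
# generative model to draft the abandoned-cart email.
# The schema and `llm_complete` are assumptions for illustration only.
import sqlite3

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to whatever foundational model you use."""
    raise NotImplementedError

def draft_cart_email(conn: sqlite3.Connection, customer_id: int) -> str:
    # Pull the structured data out of the database by hand.
    rows = conn.execute(
        "SELECT product_name, price FROM cart_items WHERE customer_id = ?",
        (customer_id,),
    ).fetchall()
    cart = "\n".join(f"- {name} (${price:.2f})" for name, price in rows)
    # Fold it into a prompt and let the model write the email.
    prompt = (
        "Draft a short, on-brand email reminding a customer about the items "
        f"left in their cart:\n{cart}\nKeep it under 120 words."
    )
    return llm_complete(prompt)
```

The forward-looking version he sketches, pointing the model at your integrations without describing any schemas, would drop the hand-written query and prompt assembly entirely.
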

Katherine Bahamonde Monasebian (15:39):

So just an extension of this question: I’m a SQL coder and I develop the reporting for my company, or I’m a digital marketer and my expertise is SEO, or I’m a creative director and I’m paid for my ideas in concepting campaigns. How do these use cases affect those jobs?

Scott Friend (16:04):

There will be fewer of them. Fundamental fact. Look, Goldman Sachs just issued a really interesting report that looked at the content of various types of work across the economy, and I won’t bore you with all the details, but sort of the net of it is you end up with about 25% of current work that can be replaced by these tools. So is that bad for people? Well, if you look historically at major technical innovations over the decades, every time there’s been major disruption to types of work because of automation, it’s been replaced very quickly with different types of higher-value work that are better jobs. And I think we’ll see that in this case, but it’s hard to argue that there won’t be major disruption to many of the sort of more clerical tasks associated with the work that we all do. And for many of us, it’ll make our lives better and our jobs more productive, because we get to focus on the stuff like making decisions and using our pattern recognition and wisdom to make judgments versus cranking out yet another tagline for an SMS. But for some people, it will eliminate the need for their jobs.

Srinath Sridhar (17:20):

I have a slightly different perspective on this, but I might just be in denial. But I’ll give you one.

Scott Friend (17:28):

PhDs, by the way, gone.

Current state and near future of AI

Srinath Sridhar (17:33):

Well, on that topic, this is the first time when software engineering is actually on the line, because usually people think that these are higher-paying jobs and they can talk about taking all the entry-level jobs, but generative AI is writing far better code than I would ever write. So my job is on the line. So there is no hypocrisy this time around. There’s no escaping this. But I have a slightly different perspective on the job loss side. I’ll give you one example that’s close to my heart, which is there’s this company called Outreach, which basically orchestrated a lot of the emails that people receive today, B2B emails about whether you want to buy this product, here’s our service, so-called outbound emails, and it would do the follow-ups as well. So you would set it ahead of time, and it would send you a series of eight emails, whether or not you like it, until you reply.

(18:31):

And the immediate thought was that it would decimate SDR jobs, because it would come in and send an email and follow up on all the emails, and that would be the first job to go. Instead, it was probably the biggest catalyst to increase the number of SDR jobs, by far. I don’t think that’s even arguable. You can ask why that is the case. And so here’s one theory of why that’s the case. When Outreach came out, it made the follow-up part absolutely automated. So nobody is going to do manual follow-ups anymore, but in that process, it gave management visibility into the jobs that SDRs were doing. And so it was definitely a cost-reduction play at first, but when the cost-reduction play played out, management could see the ROI, and they actually ended up adding more of the same to increase top line. And so the net of it is that it absolutely reduced the work that they were doing on a daily basis, changed the work that they were doing on a daily basis, improved transparency, and actually ended up increasing top line significantly, to the point where the number of SDR jobs today is large in no small part because of Outreach. So I think there will be some interactions like this that might be counterintuitive.

Katherine Bahamonde Monasebian (19:50):

Now, one provocation that I’ve just found interesting is that while on a macroeconomic level things do shift and roles change and are elevated, for any one individual, if I have what was once a differentiated skillset that’s now been automated or commoditized, I myself lose. Over time there is the transitional shift, but if we’re not looking at it from a human perspective, there will be some pain in the short run. So following up on that: if I am a person in a company and I’m trying to figure out where do I play in this explosion, do I build, do I buy, do I fast follow, do I lead? How do I integrate it with my overall technology roadmap? What do I do? Do either one of you want to take that, in terms of how you think about commercializing it?

Scott Friend (20:49):

Yeah, I mean, I would just say at a high level, given the potential implications of this sort of new set of platforms, it seems crazy for anyone in business of any type not to be experimenting a little bit, just to learn. It doesn’t mean that anything has to be put in production or in use in your job in a meaningful way, but it seems crazy not to be on the front end of learning what the implications could be. So I’d say that’s step one. Step two, as in the few examples that Sri and I gave earlier, there are lots of vendors out there that have this sort of unfair data advantage. They work across thousands of your peers, or hundreds, or dozens, but many, not just one company, and they have data collected across all of those companies, and it gives them an ability to leverage that data and train these models and do unique things. And so I would say take advantage of that phenomenon, partner with those vendors, learn from them, do some pilot projects, because you’re going to get the advantage not just of their expertise in this emerging arena, but also the collective wisdom, all the data that they’ve been able to aggregate and use to train the models.

Katherine Bahamonde Monasebian (22:02):

Great. Sri, anything to add?

Srinath Sridhar (22:05):

Yeah, so what we are seeing people do today is everybody’s going to ChatGPT trying something, everybody’s going to DALL-E generating a bunch of images, and of all the people, it drives the marketers nuts because there’s just no consistency. One person is creating a picture of a shampoo in Hawaii, another person is creating a picture of a shampoo in an upscale bathroom of some kind. It’s just all over the place. So what we try to do, at least from what we have seen, is to put the AI in the middle of a process. So for example, let’s say you’re writing copy for product descriptions, could be for SEO reasons. First, align within the team, some committee, like a content committee with marketers, on what kind of output you want to get out of this thing.

(22:58):

What is your brand voice? What’s the kind of content, what’s the length? What are the topics you want to cover? Put all of that into the ChatGPT-like prompts. So align on that first, use ChatGPT or whatever your generative AI strategy is to drive the scale, and still have people at the end of it to add their own voice or to double-check the work. So that’s sort of our recommendation. Don’t put AI up front without any human intervention, because out comes content that might not reflect your values. Also don’t hit send without using humans. I mean, don’t just change the process and let AI run crazy. So having AI in the middle and streamlining efficiency is probably the cleanest way to implement the process.
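
A rough sketch of the “AI in the middle” workflow Srinath describes: brand guidance agreed up front, the model drafting at scale, and humans reviewing before anything ships. The guideline text, product fields and `llm_complete()` helper here are illustrative assumptions, not a prescribed implementation.

```python
# Sketch of an "AI in the middle" content pipeline: agreed guidelines go into
# every prompt, the model drafts at scale, and humans approve before publishing.

BRAND_GUIDELINES = (  # assumed guidelines, agreed up front by a content committee
    "Voice: confident and plain-spoken. Length: 50-80 words. "
    "Cover product benefits only; no pricing claims."
)

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to ChatGPT or any other generative model."""
    raise NotImplementedError

def draft_descriptions(products: list[dict]) -> list[dict]:
    drafts = []
    for product in products:
        prompt = (
            f"Brand guidelines: {BRAND_GUIDELINES}\n"
            f"Product: {product['name']}\n"
            f"Key features: {', '.join(product['features'])}\n"
            "Write one product description."
        )
        drafts.append({
            "product": product["name"],
            "draft": llm_complete(prompt),
            "status": "pending_human_review",  # a person edits or approves before publishing
        })
    return drafts
```

The point of the pattern is that the scale comes from the model, while the guardrails come from the prompt agreed up front and the human review at the end.
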

Katherine Bahamonde Monasebian (23:49):

So what I hear, if I can play it back to you, is everyone needs to be experimenting and leading in this space, perhaps through a partnership strategy if that’s not your knitting, your core business, and that you should start kind of business-back, not AI for AI’s sake, but what outcomes are we trying to drive and how do we leverage this emerging technology to drive our commercial outcomes?

Scott Friend (24:14):

And I think this idea of making sure the output is consistent with, and sort of authentic and representative of, your brand voice, in the world of marketing language anyway, is a really important topic. And the example of doing that poorly that I think about all the time is one of my kid’s friends. I have a kid who’s a freshman in high school, and early after the launch of ChatGPT, one of his friends got caught using it to write a paper. Well, he said it was one of his friends, but I’m pretty sure. And I’m like, how did he get caught? Because the paper was perfect. It crushed it. The reason he got caught is because the professor was like, there’s no way this kid wrote this, right? It was not consistent with the brand voice of this particular student. And so I think there is an element of oversight. Eventually the AI should be good enough to capture that really, really well. But over the near term, there’s an element of oversight that’s really important.

Commercial applications and examples

Katherine Bahamonde Monasebian (25:12):

So we’ll close with the burning question of ethics. My point of view is: I don’t know how to catch my own food or grow it, I don’t know how to read a map, and I can’t remember phone numbers. There’s a lot that’s happened in the evolution of what it means to be human. But is this different, and how do we think about the ethical considerations of what this means? I mean, for those of you who watched the Elon Musk interview, this could be the end of human civilization; there’s been that extreme of sentiment around this technology. Can both of you, just in closing, address the ethical considerations of this explosion in this space?

Srinath Sridhar (26:03):

So firstly, on the Elon thing and the six-month moratorium and so on, I personally don’t think it makes too much sense. Firstly, there’s a difference between generative AI as we have it and sort of general AI that grows consciousness and goes out and does Terminator-style assassinations or mass killings or whatever, taking over the world. I don’t think that’s going to happen anytime soon. There’s also no containment issue here, to be honest. There’s just fundamentally no way that AI can escape, let’s say, OpenAI and then take over the world. It’s not possible, physically or virtually, as opposed to a virus, as an example. That’s possible; you’ve got to be careful there. But there’s just no way this can happen at all. So I’m not worried about any of that stuff. Italy banned ChatGPT; I think that’s perfectly justified. They want more diligence on GDPR, perfectly fine.

(27:06):

Just like a company asking to do due diligence on InfoSec, I think that’s a totally fine thing to do. I think Google’s old motto of “Don’t be evil” sort of makes sense here, which is, look, as long as you’re putting AI in the middle, like I was talking about before, I think it’s fine. If you’re using it as Gmail autocomplete on steroids, I think that’s okay. I think that’s all good. I’m sure we’ll have more regulations coming in, and I think the regulations on the product side make sense too. But we have seen that before. To be honest, this is not new, but I’m sure the FCC will come out with stuff, the FDA will come out with stuff on the product-side regulation, and that’s totally fine too. Although a lot of these we have seen before, even though it was not because of generative AI technologies in the end products. Anyway, those are my thoughts.

Katherine Bahamonde Monasebian (27:59):

Got any closing thoughts on this arms race?

Scott Friend (28:01):

I just have two quick thoughts; Sri is way more of an expert on this stuff. One is, I think there’s a huge gap between something seeming like it’s thinking and something actually thinking for itself. And I have no doubt we will see the evolution in these models where the output makes it seem like the model is thinking, because it’s so good and it incorporates so much of the knowledge in the universe in creating its response. But that is different from thinking on your own, which is the world of general AI, general artificial intelligence, that I think Elon and others are worried about. And it’s not clear to me that anyone has seen a path to connect the dots between what’s being built now and that outcome. So that’s thing number one. Thing number two, every major advance in technology has implications where I think we each, as professionals, have a responsibility to workers in our organizations from an ethical standpoint. It was true with warehouse automation, right? We need fewer workers in warehouses because we have robotics and automation, and there are implications of that. Hopefully those implications are that we create better, higher-paying jobs that are a living wage for more people. And I think we can each take it upon ourselves to make sure that becomes true on the back of this innovation.

Considerations for businesses and ethical aspects

Katherine Bahamonde Monasebian (29:20):

Wonderful. So we don’t have time for questions, but I encourage anyone who has any follow-up to reach out to our wonderful panelists and thank you for your time.

Kevin Boyd

Kevin Boyd is the web development manager at Signifyd. When not leading his team in crafting captivating digital experiences, he experiments with prompt engineering using ChatGPT and other generative AI systems, as well as writing and optimization.