User experience strategies to make products stand out and be successful

SPEAKERS
Ken Lonyai, Dave Erickson, Botond Seres

Dave Erickson 00:04
In today's market, it is critical that your products are not just good but great, in order to stand out in a crowded field and be successful. That success will most likely be directly related to a well thought out and executed user experience strategy and design. We are going to talk about user experience strategies that will make your product stand out and be successful on this ScreamingBox Podcast. Please like our podcast and subscribe to our channel to get notified when next month's podcast is released.

Dave Erickson 01:02
Standing out in the crowd can make you millions developing a new idea. How can you ensure successful product differentiation? User experience, also known as UX, is probably not what you think it is. Welcome to the ScreamingBox technology and business rundown podcast. In this podcast, my co-host Botond Seres and I, Dave Erickson, are going to chat about UX and product success with Ken Lonyai, UI/UX award winner and UX consultant. Ken is a true product geek: part innovator, part product manager and part UX designer. He is an entrepreneur and UX award winner and coined the acronym HUI, for human user interface, which covers technologies that make digital UIs more human-like. These days, Ken's focus is primarily on user-facing AI such as intelligent assistants, chatbots, conversational intelligence, and their underlying technologies. Today, we're going to chat about how to make products stand out and be successful by focusing on what makes your product great, which is your customers' experience with it. As a bonus, we're also going to chat about integrating ChatGPT and other chatbots into the UX process and product designs, which we feel is going to be the key to building successful digital products going forward. So Ken, did I miss anything? Or is there anything you want to add to the intro?

Ken Lonyai 02:31
No, I think you got it covered. Thank you.

Botond Seres 02:35
All right. So Ken, as you've been an expert in UX, we were wondering: what are some of the key considerations for defining a really solid and effective UX strategy for digital product development?

Ken Lonyai 02:51
I look at it very simply: the very first thing is approach, or mindset. It's very easy to come up with product ideas and think to yourself, oh, this is great, let's go build this. But you don't really know for sure if the user sees things from your perspective. The very simple thing that UX is based on is user-centered design. What does the user really value and benefit from? More than the old saying, I have a hammer, so everything I see is a nail; well, maybe they see a screw, maybe they see a staple, or whatever it is. So I can't necessarily use my thinking, my tool. It's the whole user-centered design approach, which starts with some research of the users, the market, competitors, the products that you have, and where they do or do not fit. So a very simple approach is the best starting point.

Dave Erickson 03:48
We feel that UX is really critical to the success of a product, and when I start talking about the UX process to potential clients and customers, they sometimes ask why it is necessary. So maybe we can talk a little bit about the higher-level basic steps of developing a really quality UX that allows you to give something to the developers so they can start developing. And within that summary of what UX is and what the steps of UX are, maybe we can talk a little bit about what differentiates a good UX process from one that is not complete or not very good.

Ken Lonyai 04:40
Yeah. So before we jump into that, I'll point out, I totally agree with you. I have this thing called the UX profitability continuum, which is a theorem I put together; it can be found on my website. There's a little graphic there that shows that UX is what drives profits. So when you talk about the why, from the business perspective, just about every business I know is in business to make money, and this is truly the way you're gonna do it. But if you talk to accounting, they're gonna say, no, we're the ones that make money, and so will the salespeople. And, of course, everyone works together. But the thing people buy is the user experience, the experience they get from the thing they want: if it's a personal item, the thing they brag to their friends about, a product or service, it's the experience; when it's for business, it's the value of that user experience. So that's the why part. The how: there are a lot of steps, and it's done differently everywhere. The first thing, again, is to start with knowing who the customer is that you're targeting. You can do general research on that, then start talking to those customers or potential customers. Interviews are the simplest thing, asking the right questions; there are probably thousands of articles on how to do that. I think it's a little bit more art than science, so it's a learned thing. So is asking open-ended questions, so that they are doing more talking than you are, just to get that feedback, and challenging some of the things they say. People will say, oh, I need this, but in reality (I'm exaggerating here) they may need something else, so you really have to flesh that out, because if you don't do the interviews properly, or get the feedback properly, you're going to build something and it's still going to miss the mark. From there, I would develop a persona around the people, possibly even the persona first and then vet that out, so it could go either way; it depends on where you are in the market. So now we have some feedback, and we are defining who the feedback is from. Start to look at, again, as I was saying, what's in the marketplace. What do they have? That should be part of the questioning: What are you using now? Are you using our product? What do you like? What don't you like? What's the dream feature we don't have? So it's trying to elicit those kinds of things. Or if this is a completely new idea, something that never existed, will it really help them? You're trying to elicit whether there is really a market for this. Again, I come up with a couple of ideas every day, and most of them are not worth pursuing for any number of these reasons. If you really feel it's vetted, then it's starting to get into some wireframing, some basic design of what this might be. It could be low-fidelity wireframing; I just got rid of my Sharpies, but Sharpie on paper is enough to get the ideas across in a sketch. Pencil and paper; you don't have to use all these design tools. So a low-fidelity sketch, and if you have good designers and start vetting this through, it goes into the real design, back to your users or your prospective users to see if this feels like the right thing, and then you can go into more of the development process. I don't want to take up too much time on that, but to me it's a very linear process, and it's about covering these bases. I'm sure there are all kinds of acronyms and techniques; I try and keep it simple. But the one thing I'll say is that once you get to prototyping, you also want to get feedback.
So if you have trusted potential users, and if you have something proprietary, they'll sign an NDA and keep it quiet, there are different types of testing. I do things like card sorting and tree testing, which I won't explain here, but those are things I use. The other thing, coming back to the interview part, which I guess I left out: I try to do contextual observation, which is just being in the context of the user. If it's something they have, competitor software, our software, or hardware, it's basically looking over their shoulder, just observing and asking questions. Why did you do that? Why do you prefer that method? It's part of the research process.

Dave Erickson 09:03
It seems like, if you're doing greenfield product development, where somebody said, hey, I woke up, here's my product idea, let's make it, and they have nothing else, that's one kind of UX process. Another type of UX process is when they currently have a product, it's not functioning very well, and they want to make a new one. It seems to me that the second one, where they already have something or they're using something, is actually, from a UX perspective, easier for this interview process. Because then you can ask: how do you like what you're currently using? What don't you like about it? If you could do anything, what would you wish for? So that's easier to work with in that sense, I think. The greenfield case is a really different ball of wax, and it seems like that would be much harder to start a UX process in. Is that correct? Am I looking at that correctly?

Ken Lonyai 10:06
I don't want to say correct or incorrect; I would just say they are not as dissimilar as that. Yeah, it's harder, because being able to show them the thing and say, so what do you think of this, how do you use this, is definitely easier than saying, imagine something, because people don't imagine the same way. So, again, if it's not so proprietary, you can share some sketches of things. Yeah, it's definitely harder; I think in the end, it's the same process. But if you have this, say this is a blue pen and you want to switch to red, it's gonna be a lot easier to ask, so do you have a use for red ink, and they're gonna relate much better. So it's definitely easier, but it's relatively the same process.

Botond Seres 10:49
One of the things I find most fun about UX strategy is all the smoke and mirrors that go into things at times. Some of my favorite examples are the like button on Facebook, which takes exactly as long to animate itself as the average request takes to complete, or that most loading bars are 100% fake, and they just run through the average time and then sit at around 80%. So I'm wondering if you can open our eyes to some more fun examples of smoke and mirrors that maybe we don't even consider or notice.

Ken Lonyai 11:33
I mean, the idea there is to pacify the wait time. People hate to wait. So by giving them something, it makes the person feel that, oh, this thing is responding to me. I'm updating my website, and it has a loader because it's bogged down. It's an animated GIF; it's not saying you're at 20%, 40, 60, 80. It at least says something, that the server's not frozen in those moments when it's pulling in a lot of large images or things. But I don't have a good answer for you both, because I can't think of good examples of smoke and mirrors off the top of my head. But yeah, they exist. It is, again, pacification, if that's a good word to use. But it's a great point you make. Yeah, I can give you an example; I'll try to keep it short. I worked for a very large bank. I was the product manager on their AI team, and what I was ultimately responsible for was this: the chatbot would be the first initiation of conversation or dialogue. At some point it can't do everything, so we pass it to a human agent, and the gap between those two is the queue that someone gets transferred to. The queue was being managed through third-party software that the third party provided, and it was missing some key things, one of which was the wait time indicator, the queue messaging. So the chatbot says, ah, I can't help you with that, I'll transfer you to an agent. Okay, that sounds reasonable, but is that a minute away? Or 20 minutes away? If it's 4:59 in the afternoon and agents go home at five, the way that queue worked by default, you just popped into the queue, and you'd be waiting till 9am if you stayed there. So addressing those things psychologically is absolutely imperative. That message should tell someone: yes, I can transfer you to the queue, but agents won't be available till tomorrow morning; would you like to leave a callback number or a text or whatever? Those are typical places where UX falls apart. There's the easy side, the easy planning, like yeah, make the button this big, make it visible, but it's all the edge cases that are the destroyer of UX. And that's where the frustration comes in. The other thing I'll say: the goal of UX is to be delightful. Frustration is the opposite of the most used word in the UX field, which is delight. Being popped into a queue, and maybe it's only a 20-second wait, but in that scenario, the way it was first presented to us, no one would have known that. Some would probably wait 20 seconds, but then: do I wait 30 seconds? Two minutes? So those little, I don't know what to call them, loopholes, are the unconsidered aspects that we were handed and then had to address to solve these problems.
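The queue example above boils down to a small decision: before transferring, check agent availability and the expected wait, and message the user accordingly. Here is a minimal sketch of that logic; the support hours, the two-minute threshold, and the message wording are illustrative assumptions, not the bank's actual implementation.

```python
from datetime import datetime, time, timedelta

AGENT_HOURS = (time(9, 0), time(17, 0))  # assumed 9am-5pm support window
LONG_WAIT_SECONDS = 120                  # assumed threshold for warning the user


def queue_handoff_message(now: datetime, estimated_wait_seconds: int) -> str:
    """Decide what to tell a user the chatbot is about to transfer to a human."""
    open_at, close_at = AGENT_HOURS
    closing = now.replace(hour=close_at.hour, minute=close_at.minute,
                          second=0, microsecond=0)
    reached_by = now + timedelta(seconds=estimated_wait_seconds)

    # The 4:59pm case: don't silently park the user in an overnight queue.
    if not (open_at <= now.time() < close_at) or reached_by > closing:
        return ("Agents won't be available until 9am tomorrow. "
                "Would you like to leave a callback number or get a text instead?")
    if estimated_wait_seconds > LONG_WAIT_SECONDS:
        minutes = estimated_wait_seconds // 60
        return (f"I can transfer you now; the current wait is about {minutes} minutes. "
                "Want to hold, or should an agent call you back?")
    return "Transferring you to an agent now - this should only take a moment."


# 4:59pm with a 20-minute estimated wait crosses closing time, so be honest about it.
print(queue_handoff_message(datetime(2023, 5, 1, 16, 59), estimated_wait_seconds=1200))
```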

Dave Erickson 14:42
If you had to define what the end result is, what would you define as a good UX? What are the characteristics of a product that was done with a great UX?

Ken Lonyai 14:58
It's that delight thing: people come away feeling they got value, they have delight. I don't know a way to measure that perfectly. One way is to follow usage, if you have any type of usage metrics or usage tracking built into something; again, I'm mostly thinking of software, but this could be a service, it could be hardware. For hardware it could be runtime: what's the average runtime, is it going up, and if we upgraded from the old version to the new version, do we see better runtime? Anything you can embed in software that's going to show those types of things. Obviously there are also customer service and feedback channels. If you go to any e-commerce site, they all have reviews, and the reviews give you a lot of feedback, especially from people you otherwise wouldn't reach. Customer service channels get a lot of feedback. When I think of developing a product, and I'm gonna sound like I'm contradicting myself since I talked about what the user needs, one of the ways I measure that is by talking to salespeople. If it's an e-commerce site where people buy on their own, you don't have that, but if it's B2B or B2B2C, there are salespeople that manage these accounts, and they're always talking to customers and potential customers, so they're a wealth of knowledge. I was just thinking of this, knowing I was going to talk to you guys, and I think I need to write a little article on it. I talked about personas as well; why not a persona for the salesperson that you're dealing with? Bob is a 40-year-old salesperson at my company. He's got two kids, a very expensive house, two car payments; he wants to make commissions to keep his lifestyle. So my point is, if I give my salespeople, my persona Bob or whoever, a product they can sell, and they're selling it and they're happy, then that is another measure that I'm reaching my ultimate user or customer, because if they can't sell it, there's something wrong. Call it a back channel to also follow. So there are a lot of different ways to get that feedback. The hardest part, and you were kind of alluding to it with a greenfield product versus an existing product, is that for the greenfield product there's really no way to do that, but for an existing product we have this extra back channel. So it's a whole iteration process. This thing is never complete and built. It's build it, test it, deploy it, get feedback, and it just keeps going through that loop. It's a spiral forever.
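As a concrete version of the "follow usage" idea, here is a minimal sketch of usage tracking that logs sessions per product version and compares average runtime between releases. The event fields and the in-memory list are placeholder assumptions; a real product would ship these events to an analytics backend.

```python
from collections import defaultdict
from statistics import mean

# In-memory stand-in for an analytics store; a real product would send these
# events to a metrics pipeline instead of keeping them in a list.
events: list[dict] = []


def log_session(version: str, runtime_minutes: float, feature: str) -> None:
    """Record one usage session for a given product version."""
    events.append({"version": version, "runtime": runtime_minutes, "feature": feature})


def average_runtime_by_version() -> dict[str, float]:
    """Compare average runtime across versions - the 'is runtime going up?' check."""
    by_version = defaultdict(list)
    for e in events:
        by_version[e["version"]].append(e["runtime"])
    return {v: round(mean(runtimes), 1) for v, runtimes in by_version.items()}


# Hypothetical sessions: the new version keeps users engaged longer.
log_session("1.0", 12.0, "wash_cycle")
log_session("1.0", 9.5, "wash_cycle")
log_session("2.0", 18.2, "wash_cycle")
log_session("2.0", 21.0, "eco_mode")
print(average_runtime_by_version())  # {'1.0': 10.8, '2.0': 19.6}
```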

Botond Seres 17:46
Iterative development in itself is one of the most powerful tools we have in the industry. I just saw a documentary about one of the tools that is a giant by any measure, and their secret weapon is having one-week iterations: dedicating Friday to gathering feedback, and then integrating and implementing the results of that feedback during the next week. And then they do it all over again, over and over, until it is as good as it can be. On that note, I was wondering: what is the typical cycle you see in UX development? Personally, I feel like it could be a bit quicker in my experience, so I'm wondering what's…

Ken Lonyai 18:40
Yeah, so I'm also going to put in another little plug for one of my concepts. I call it InO products: innovation, product management, and UX. In a typical company, these things are siloed. Everyone does have their role, and I'm not saying people should do other people's roles, but I think all roles should have a level of understanding of all these aspects. It's not, UX people, go do this, give it back to me, the product manager, and I'll deal with it. A really good UX person understands what the product manager is going to want or is trying to achieve, because they have a lot of communication. So InO products is, again, a mindset more than a set of specific steps or instructions. When everyone, engineers, POs, PMs, UX researchers, UX designers, has a level of understanding of the other roles, things are able to flow a lot better. So you were asking about the iterative process and how to speed it up. I think, first of all, it's having that better understanding, instead of, I did my part and handed it off, and I'll wait for you to get back to me. You said basically one-week turns, which is probably pretty tough to do unless they're small iterations. But you definitely need that internal communication, and a little bit of graying the lines and crossing over, like: yeah, I know you're the PM, but here's what I did, because I know the goal you're trying to achieve, rather than, here's my research, you figure out what the results mean to you. It's like, yeah, I can throw away this piece of the research because I know that's not where we're trying to take the product. Or, as the PM, I'm saying, I need you to get this feedback for me, whether it's the UX people or the salespeople, as I said, giving them something to focus on rather than, just let me know what you hear. I need to know about this, whatever this is, so that I can focus our work and turn it faster. Because if you get too much feedback, I have to sort through it all, and what happens then is prioritization. You build a roadmap of what you want to build as a PM, and then things come at you that try to change your roadmap. There are bumps in the road: technical bumps, things happen, servers go down, the CEO comes in demanding something. So managing the timing is very important. So that's focus. And that's why I talk about InO products: not to come up with cool words, but to make the point that everyone has to have a good understanding and a willingness to step over the line a little bit, step out of their lane.

Botond Seres 21:31
I think that's great advice in general: become familiar with the different actors.

Ken Lonyai 21:37
Yeah. It's easy to say I'm this, but I'm this but I can look.

Botond Seres 21:44
Cool. They don't care how it looks, right? It's a standard.

Dave Erickson 21:50
When you start a UX process, how many people are you going to interview, and how long or how deep do you want to go in the interviews? I mean, if you're developing, say, a product, the stakeholders in the product can be, like you said, salespeople and marketing people and the operations people in the back end of the company. And then you have the external people, which are potential customers, or resellers, or wholesalers. You could build a matrix of interviews where you'd be interviewing for months, or a year, right? How, as a UX person, do you lay out a UX strategy where the interview process gets everything you need, but at the same time you don't spend the entire time doing interviews?

Ken Lonyai 22:45
Yeah, and it goes for the testing side, too. If you're gonna have people test, how many do you need to get feedback? There are some different schools of thought on that, and I can't believe I'm absolutely drawing a blank on the most well-known UX people that talk about interviews and testing. On the testing side, usually 50 is your max; I wouldn't say max, but a reasonable max that you can go to. You can often set that up through testing services, which makes it easy to manage. On the interview side, as you said, I bet there are tools out there, but you really need face-to-face dialogue. So, short of using a recording tool or a transcription tool, I don't know if there's a shortcut around that. So yeah, you can get bogged down. I don't have a good rule of thumb to go by there. I think it's partly who you're talking to and how much you feel you're getting out of them. So I think this is the art over science, the heuristics side, but I'm sure you can talk to a lot of people who are gonna say, oh, you absolutely need 11 people, or 50 people, or it's five people. I'm still not thinking of the name of the UX firm I'm talking about; after the podcast, I'll think of it. They talk about as little as five people, and they probably know much more than me, but five never feels right to me. I get the sense that there are statistics behind it. So again, they know more than me, but…

Dave Erickson 24:25
I agree with you, it's kind of the art, right? After you've done a bunch of UX products, you kind of know, or I think you would know: okay, for this type of product I need 15 interviews, and for this type of product maybe I need 100 interviews. I think that's where part of the art is, trying to know what that number is. On the other hand, as Botond brought up, if you're going to do so many interviews that it takes six months to do that side of it, that may not fit or be reasonable within the business plan or business model. On the other hand, if you do two interviews, you may not get enough data to mean anything. Right?

Ken Lonyai 25:07
Yeah. And that's the hard part. I think: what are you building? How big a change is it, if it's an existing thing? A long time ago I was building an artificial assistant. Siri existed, Viv was in development (which is now Samsung Bixby), and Alexa was not known, at least publicly, at that point. So you could talk to people, but we didn't have the level of AI that we have today. The NLU was not there at all. Watson was dominant at that time; this is, I can't say exactly, but no more than eight years ago, so it's not ancient history. But to go out and talk to people: most people, if you just picked a person and asked, hey, what do you think if something could do this, would say, yeah, sure, I'd like that. But no one was thinking about it deeply enough, because they had no context for the technology at the time. And now we've gone the other way, where AI is just captivating everything, and I hear the term "The AI," like it's this brain that's engulfed the globe. So yes, something like that would be very hard: you could talk to a lot of people and not necessarily get much feedback. You could go on and on asking, and they'll probably say all positive, wonderful, dreamy things, but what are they really going to do? Because at that time, it was still going to be useful, but limited.

Botond Seres 26:31
I mean, if you don't mind, I would like to circle back to the interviews for just a quick question. Like, if I do one interview, and there are 1000 participants, does it count as one interview in the five interview rule?

Dave Erickson 26:50
Well, like you were in an auditorium with a crowd?

Botond Seres 26:54
A focus group, or

Ken Lonyai 26:56
Yeah, I am not a focus group fan myself, because a long time ago, as a college student: hey, you can make some money in a focus group. Oh, it's 50 bucks? Yeah, I'm there. What do you want me to answer? Where's my check? So I feel there's a groupthink that comes on in these focus groups. Maybe I was just some greedy college kid, or was. There's that aspect, but people also just get drawn in together. I saw a really good video about a month or so ago that had nothing to do with focus groups; it was about psychology. It was a test, some well-known thing I don't know the name of, and they had a fake proctor and, like, five people. The fifth one was the subject they were testing, or whatever the proper psychological word would be, and the other four were actors. Alright, what's the question? Answer this question, and they would all get it right. Do it a second time, a third time, and they'd all get it wrong. And then you see this subject, and their face is a little quizzical; they know that's the wrong answer, but they say it anyway. They're being drawn in to be part of the crowd. So that's why I don't like focus groups. I think, the average, I mean, I'm sorry,

Botond Seres 28:10
Peer pressure is just a force of nature.

Ken Lonyai 28:14
Yeah. So you could talk to a client and maybe have three people from the client together, there's some risk of that, but I would not want 6, 8, 10 people in a room because I think that's going to happen.

Dave Erickson 28:28
When you go through the interview process, the UX process, it's better to do the UX process before any product development is being done, right? Because part of the UX process is getting to understand what the product should be. On that note, when you go through a UX process, what is the deliverable? What are you trying to deliver for the next step of the process? I have a feeling that varies, but I thought I'd ask you: what do you have in your mind as the deliverable of the UX process?

Ken Lonyai 29:05
It definitely varies. It could be a strategy, if you're developing a UX strategy, or it could be a definition of the experience, literally, of things. So we're going to have three buttons; I'm not talking about the UI design. There's always this confusion between UX and UI. I see UX slash UI, or is it UI slash UX, and I know the UI is a portion of UX. We could get into CX versus UX as well; I'll say UX is almost CX, but there is a little difference, and practically speaking we'll call them the same. But I'm just saying there are three buttons, and it gets into more of the information architecture side of it, then the definition of the UI type of thing. Again, I'm talking software, but this could be a dishwasher and three controls or whatever. Generally, though, no matter what you do as a UX researcher or designer, the product manager makes the final decision. There could be some reason, and I'm thinking of the dishwasher: we can't do the three-button thing because our tooling is set up to do four buttons, and it's gonna cost us $800,000 to change the tooling. So that's great, but we're still going with four buttons. So it's still at the recommendation level. In that case, it could be the strategy, it could be the UX recommendation, it could be a roadmap, so to speak, or a blueprint, or it could lead to wireframes for the UI designer, again low fidelity, anything like that. I don't want to create hierarchies implying someone is more important or higher, but generally how it goes is that the UX researcher presents, and they're presenting to a decision maker, typically the product manager. That doesn't mean they have any less value or worth, but that's the structure, how things work. You have to have someone that owns the product. I talked to someone recently, and we both agreed that the names are backwards: PM and PO should have been flipped, and the product owner should be the final decision maker. That's not how it is, and we're not going to change that now. But anyway, the PM is the person that says yes, we're doing this, no, we're not doing that, and when we're doing it. The best UX people, like Jony Ive at Apple, probably had more power than any other UX person can have, but technically didn't make the final decision. I don't know who the product people he worked with were and whether they would override his decisions; that's a very unique case. But in general, these are recommendations, and there are different levels in different areas that you can go to. And it could be something simple, where it can just be handed off. So again, with the InO products thing, it's a relationship: if I'm trusting you and you're trusting me, and you tell me to do it, I know it's good, I'm just gonna do it. So I know I've given you guys a lot of gray area here. There are other people who are very rigid about these things; I don't see it that way.

Dave Erickson 32:26
Yeah, and this is something people forget about product owners. I've actually had clients that didn't have a product owner; they didn't know who the product owner was, and the UX person is basically talking to everybody, figuring out what it is that they need to make to have the best product. And then at some point, the product owner needs to say, yes, this is what we're going to make, right? Then you can hand all that stuff off to the developers, or the engineers, or whoever, and they can actually go make a product, because they know what they have to make. But I think that in that process, somebody's got to make the decision, and that's really where the product owner comes in.

Ken Lonyai 33:16
Yeah, well, really it's the PM; if you have both, it's the PM making that decision, and the PO is managing the team more. That's why I say the titles are really reversed, but that's what they are. Yeah, it's the PM, the one who owns it. And the thing I always say: when the product succeeds, the team succeeds; when the product fails, it's the PM, because they are the decision maker. So when I've worked as a PM, that's the responsibility I'm willing to take, and that motivates me: let's get this right. Because yeah, all the fingers are gonna point. Who screwed this up? It was Ken, he's the one who said to do that. Ken, Ken, Ken. So okay, let's make sure Ken looks good here, think this through, get it right. So yeah, it has to be someone like that. And I've been in situations where leadership above me still says, no, do this. Okay, it's gonna fail, but I can't override them. So again, there are these gray areas everywhere, and I've been in very large companies and never been at a place that has an engineering manager, Scrum Master, BA, PO, PM, UX lead, UX designer, and UX researcher all in place. There are always missing roles, so there are gaps that someone's got to fill.

Dave Erickson 34:38
What about AI? Right? AI may be an option for filling some of those roles. I mean, some development projects or companies don't have the ability to have a full staff that is needed to do a proper UX. What do you think about AI? And what do you think about AI having a role in UX development?

Ken Lonyai 35:03
So regarding AI: on the one hand, I love it; it's in my name. On the other hand, I hate the term because it's so broad. If I told you I went to a great restaurant last night, you'd say, oh, what did you eat? Food. That's like AI. It's food; it's not very descriptive, I have to tell you,

Botond Seres 35:21
That's so true. Everybody says, oh, it's AI. Is it a deep learning model, is it a language model, is it generative, is it deductive? What is it?

Ken Lonyai 35:31
Yes. Yeah, so that's why I don't like the term. But there are probably possibilities. I don't know of anything now that's overly helpful. I just put a post out on LinkedIn: I saw this thing where someone used ChatGPT to cheat on a job interview. So if you're interviewing me and ask me some questions, it pops up on the right; it's listening, going speech to text, posting out to ChatGPT, and the answer comes back, and I'm supposed to be talking to you saying whatever it tells me. The best way to blow a job interview is to count on that. So right now, I don't know if there's too much there. It can generate questions for you to ask, and it can be a starting point, though I would definitely review those questions. And last night I was playing with Bard. I got a message from Google that said, try Bard. Okay. I asked Bard a question and it couldn't answer. I said something to the effect of, how are you making these decisions? Aren't you just reviewing what humans wrote? And it said, I don't remember exactly, it had better verbiage and I should have saved it, yes, I am summarizing what humans wrote, and the opinions are theirs. So that's the thing to remember here. If you're asking these public models today, they're doing a beautiful job of writing up what's already out there. So you get sort of generic answers, even though you can pin them down and refine what you're getting back from the prompts you give them; you still have to review it. So it could be a great starting point. I think maybe people that are new to this can start that way. If you've done this a lot, you should have some pretty good ideas, if you know your product or your company or your product line and the customer you're going after, again having that persona defined and knowing what it is you want to build. You should be able to generate your own questions, the same way you should be able to answer questions in an interview; even if you fake that, you're going to be out of that job pretty quick. So I think there are some possibilities there, but right now, with what's out there, I'm not sure there's too much more. I looked at a few articles; there's a thing now, write articles or make videos about ChatGPT because it gets good views, and I saw three or four articles on how you can use ChatGPT. I'd just like to point out that it's just one LLM; it's getting the attention, but there are lots of others. One suggestion was to use it to generate filler text. If you do design, everyone uses lorem ipsum, so you could tell it, oh, I need filler text about this, and get some detailed response. I happen to think that's a horrible idea, because if someone reviewing the layout sees lorem ipsum, it's out of their mind, but if it sounds like company text, they're gonna waste their time: oh, where did you get this text? Who approved this? That's wrong. You've totally derailed the process. So I disagree with some of these ideas that are just being generated to say, oh, we have to use this tool. It's still very nascent, early days, and I think it will grow. Right now the best I can think of is using it to draft questions; maybe for interviewing you can also use it. Another thing, I guess, would be to summarize results if you want a quick summary. Like, maybe I'm interviewing you guys, getting feedback, and we're recording this.
I run it through my speech-to-text, I get a transcript, and I pass those transcripts, along with some number of other interviews, x, whatever that number is we've never agreed on, to one of these language models and say, give me a summary of this. Tell me the most liked things, the most needed things. So yeah, it could be great to get that. Once I got that summary, I'd review it, and if it says, oh, they hate our thing, I'd want to go back and hear that firsthand, not just trust my LLM summary. But it could save a lot of time getting me to the point of at least thinking, it sounds like they hate this and they want this. So those types of things, right now, I think could be useful.
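As a sketch of the transcript-summarization workflow described above (speech-to-text output in, cross-interview themes out), the snippet below assumes a hypothetical call_llm helper standing in for whichever model API you use; the prompt wording and the review step are the point, not any particular vendor's SDK.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for an actual LLM call (OpenAI, Bard, a local model, etc.)."""
    raise NotImplementedError("wire this up to the model of your choice")


def summarize_interviews(transcripts: list[str]) -> str:
    """Ask the model for cross-interview themes, then go verify them firsthand."""
    joined = "\n\n---\n\n".join(transcripts)
    prompt = (
        "You are helping a UX researcher. Across the interview transcripts below, "
        "summarize: (1) the most liked things, (2) the most needed things, "
        "(3) recurring complaints. Quote the interviewee where possible.\n\n"
        f"{joined}"
    )
    return call_llm(prompt)


# Usage: feed it the speech-to-text output of each interview, then treat the
# summary as a pointer back to the raw transcripts, not as the truth.
# summary = summarize_interviews([transcript_1, transcript_2, transcript_3])
```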

Botond Seres 40:00
That's fair. Personally, I feel like the most resource- and time-intensive part of using LLMs is actually setting them up to produce the results that you're looking for, especially in the formats you're looking for.

Ken Lonyai 40:16
Yeah. Hopefully in that scenario, though, if you spend that time and work out the prompts for this product iteration, then when we're doing another iteration down the road, it won't change too much, maybe some specifics. But yeah, that's why there's so much hype. They are great tools, and I'm amazed at how well they write. I'm not always happy with their factuality, but they're still pretty good. But it's not refined yet; it's got a ways to go. That's why "The AI," like, where's The AI? So it's…

Dave Erickson 40:53
Well, in UX, and related to UI, probably one of the bigger jobs, maybe the job, that in the future is going to be fairly required is integrating ChatGPT, or any other kind of chatbot, into a website, a mobile app, or some product. Are there rules or things that you need to do from a UX perspective when developing that, or making it so that the chatbot actually functions correctly and works within a UX structure?

Ken Lonyai 41:33
Well, funny you should ask that question. I just talked to an engineer friend this morning; we're installing a chatbot on my website. It's powered by GPT-3.5 Turbo right now. It's limited to three topics, and my three topics are innovation, product management, and UX. If you ask it how to feed a cat, it's gonna say, sorry, I can't help you with that. So it's a proof of concept. Its name is InO, same as the InO products thing, and I'm not guaranteeing what it's going to say. It'll be up and running soon, and if it tells you something and you question it, you can ask the bot to contact me; there'll be no more contact form on my site, you'll just ask InO to contact me, and she, it, whatever you want to call it, will do the work and contact me. So if InO tells somebody something that's incorrect, I'll see what InO had to say and see if it was right or wrong. In my opinion, and again there's a lot of gray area to this stuff, in thinking this out, the prompts that are built in control what seems to be this UI of InO. I don't even know if I was going to reveal what's behind it, so now you guys got the scoop. I mean, maybe we'll have to switch it to Bard; it's also built so we could plug one model in or take another out, so maybe that will change over time. But part of it was engineering the prompts to get back what we want, which is not the hardest thing. Part of it is what's the right temperature. We just did a test yesterday, turning up the temperature a bit, because there's the detection of things like, just send a message to Ken. It can be very literal, where you have to say these exact words, and that's the way NLU was not very long ago, 8, 10 years ago; it was very hard to train. Again, when I was at the bank: I want to transfer money. That can be said 20 different ways or so. It has to understand every one of them, and then all the steps in transferring the money have to be trained on every little aspect: from, to, what amount, when, and what vehicle, like Zelle, or wire, or whatever it is. Now these things are pretty good, but you can change the temperature, how inventive it is in understanding and responding. So the thing I mentioned, how to feed a cat: we turned up the temperature, and it gave a truthful answer that it didn't understand, but in a way that was not acceptable to me. It was kind of saying goodbye, well, I can't help you with that, so long. And I'm like, whoa, whoa, whoa, we don't want to stop the conversation just because someone asked a question, possibly testing to see what it does. So don't say that. This is a long-winded answer, I know, but there's some tweaking on the prompt side, and in controlling what you're feeding it and what you're getting back, in terms of UX, in terms of the content. We're talking about generative stuff here; we cannot fully control the content, especially in a conversation where you don't know where it's gonna go, which has been the problem of NLU and conversational UIs forever, because of what I just said. When I was at the bank, my conversational designer was asked by the training team, oh, give us 200 examples of each utterance that's possible for each step. And he was like, how can I do 200? Even when you get to 20, it starts getting tough; there are too many little nuances to just come up with something.
But you don't know where a conversation can go, even if it's a legit conversation. My thing is, I'm always trying to break everyone's AI or NLU, just because I'm testing to see how good they are. It's not because I'm malicious. I just want to know: hey, you really figured this out, this is really good, or, this needs a lot of work. So…
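Two of the levers mentioned here, a system prompt that constrains topics without ending the conversation, and a temperature setting that limits inventiveness, can be sketched roughly as below. The fields mirror common chat-completion APIs, but the wording, topic list, and build_request helper are illustrative assumptions, not the actual InO prompts.

```python
SYSTEM_PROMPT = """You are InO, an assistant that only discusses three topics:
innovation, product management, and UX.
If asked about anything else (for example, how to feed a cat), say you can't help
with that topic, but invite the user to ask about the three topics or to have
InO contact Ken. Never end the conversation.
If the user questions an answer, offer to pass their message along to Ken."""

CONFIG = {
    "temperature": 0.3,   # keep it low: less inventive, fewer surprise goodbyes
    "max_tokens": 300,
}


def build_request(history: list[dict], user_message: str) -> dict:
    """Assemble one chat turn; whatever model backs the bot receives this payload."""
    return {
        "messages": [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": user_message}],
        **CONFIG,
    }


# Example off-topic probe, the kind used to test the guardrails:
request = build_request(history=[], user_message="How do you feed a cat?")
print(request["messages"][-1]["content"])
```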

Dave Erickson 45:53
It's really interesting, because you're defining the dataset, or the learning experience, or going through the process of teaching the chatbot, as part of UX. And now that I think about it, yeah, it is, because the user experience is literally the answers that the user is getting. I also don't think businesses realize this. They think, oh, I'm just gonna put a chatbot on my website or integrate it into my mobile app or whatever. They don't realize the hard work. The integration and development to insert a chatbot into a website is not really that difficult; the difficult part is that you have to build a dataset that the thing can learn from, so it gives the right answers to the right questions. And I don't think people realize how much work that is; it's a lot of work to generate those datasets. Like I said, I didn't think of it as user experience, but yeah, that process is directly user experience.

Ken Lonyai 47:01
Yeah, and the edge cases and the breakpoints are what make the user experience or not. A simple question like, what hours are you open? Okay, not too hard. But if it starts getting into, well, what hours are you open when, I'm just trying to come up with a good example, when I go to this location? I'm sorry.

Botond Seres 47:26
I mean, you could just ask, what hours are you open next Christmas, or the next public holiday, right?

Ken Lonyai 47:33
Right. So you have to start thinking of all these edge cases. Yeah, I had one of those, not even an AI thing; it's a great point. I was at a different bank designing…

Botond Seres 47:42
You see, that's exactly where Google Maps fails me every time. Like, I try to see whether a business is open, and it just says, eh, it might be, might not.

Ken Lonyai 47:50
Yeah, or "typically." Yeah, so it's all those little edge cases and things; that's always what gets neglected. So yeah, you can go find a bot, free or 50 bucks a month, someone can stand it up in a day or less, and now we have a bot, so we're cool. But no one tests all these things. I shouldn't say no one, but typically these things only get some basic tests, like, wow, we have a bot. And some bots are still, though it's probably fading away, a tree structure, a decision tree, so it's a path. You get something out of that; it's going to give the correct answers along its path, but you can't deviate from it. Most people's needs probably go beyond that path, and there's a gap right there. So now there's a problem and you can't get anything out of it. You just go down this path, get nowhere, and now have to go seek out a way to get to a human. It just wasted your time; it does not make for good UX. Take the money transfer thing: if I go to a bank, I log into the app, I want to transfer money. If it's something I can't do myself, it really does not matter to me whether it's a human or a bot executing it. I just need to transfer the money, and that's it. How cool it is and how much effort went into it, the user does not care. The user wants what they want, and they want to move on. So yeah, you're absolutely right. Those little things are UX, for those reasons, because when it doesn't go the way they want, they're not happy, they're not delighted.

Botond Seres 49:26
I do wonder, Ken: you mentioned decision trees, but another favorite abstraction of mine is state machines. And I feel like they could be, or maybe are, an integral part of UX. I would like to hear your thoughts on this.

Ken Lonyai 49:44
It really depends on what's being built and how. I would never have built a decision tree, but I was up for a role one time as VP of something at a small bot company, and they were telling me, people don't want NLU, they want decision trees, that's how we build our bots. And I thought, this is not gonna be the role for me. But on using state machines, it really depends on what you're up to and how it's going to be built. Do you have a specific example in mind?

Botond Seres 50:16
I mean, sure, we can go back to the example you mentioned before, where I would be talking to a chatbot, essentially, and then when it can't help me, we get to another state where it just puts me in a queue. Or, from my side, the state machine could be navigation in an app, which I think is often neglected: if I go to this menu, I just cannot close it anymore, there's no X, no nothing, and I'm stuck. Stuff like that.

Ken Lonyai 50:51
Yeah, so that, I wouldn't call it an edge case, but that is where the lack of testing has created the problem. My answer is that I've always driven the NLU side of things. NLU has drastically changed in the last year or two, mostly the last year, and understanding the user is the best way. Prescribing a path is always going to lead to a locked door or a dead end or something. But again, people might say or want anything, and as Dave was saying, the training to get you to the point of being able to handle that is immense. So we're kind of on the cusp of that with the LLMs, and each week it's, well, this one is now trained on a bigger data set. It's now a data set race, which presumably makes it better; I wonder if anyone has validated that. There's probably a certain level where, as you get bigger, it may not really get better. That's my inkling.
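The state-machine framing maps naturally onto the handoff discussed earlier: bot conversation, queue, human agent, with an explicit exit from every state so the user is never trapped. A minimal sketch, with made-up state and event names:

```python
# States and the events that move between them; every state has a way out,
# which is exactly the property the stuck-menu example was missing.
TRANSITIONS = {
    "BOT":      {"cant_help": "QUEUE", "resolved": "DONE", "user_quits": "DONE"},
    "QUEUE":    {"agent_free": "HUMAN", "after_hours": "CALLBACK", "user_quits": "DONE"},
    "HUMAN":    {"resolved": "DONE", "user_quits": "DONE"},
    "CALLBACK": {"number_left": "DONE", "user_quits": "DONE"},
    "DONE":     {},
}


def step(state: str, event: str) -> str:
    """Advance the conversation state machine; unknown events keep the current state."""
    return TRANSITIONS.get(state, {}).get(event, state)


# Walk the 4:59pm scenario: bot can't help, agents are gone, offer a callback.
state = "BOT"
for event in ["cant_help", "after_hours", "number_left"]:
    state = step(state, event)
    print(event, "->", state)   # BOT -> QUEUE -> CALLBACK -> DONE
```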

Botond Seres 51:59
But is it possible even to fully validate an LLM?

Ken Lonyai 52:06
I would think fully, no; you just have to use your test cases and get to a level of confidence. It's still no different than hiring a customer service rep who is allowed to go off script. If they have to have scripted answers, say in pharma, they really have to read what's on the screen. But if you have someone you allow to work from just bullets on the screen, or to freewheel it, you really have to trust them, and trust that they weren't drinking this morning.

Botond Seres 52:35
You know, I feel like if an LLM were handling my banking transactions and support for my bank, I would feel like there's this intern who started yesterday after a huge bender, and whatever I ask him to do, he just replies, trust me, bro, I got this. That's the energy I get from LLMs.

Dave Erickson 53:01
I think one of the issues is that as people use ChatGPT, when they then see some other kind of chatbot, they're gonna think they can interact with it like they would with ChatGPT, right? So, for example, they come to your website, and yes, if they say, what hours are you open, that's easy to put into an FAQ and teach your chatbot. But what if they come to your website and say, I have this idea for a product, and it's kind of like this, how can you help me? It's a very broad, ambiguous kind of question, but it actually was given some information, some context, and it's asking, how can your company help me with this concept? That's very broad, and it's gonna be really hard to make a dataset that can handle that. But at the same time, ChatGPT could get the concept; it would go to your dataset, and I guess, if you built your dataset correctly, you would have enough information about your company and what you can do that it would be able to produce an answer that actually might make sense, right?

Ken Lonyai 54:22
In theory, yes. There are two things to unpack there. One, all these LLMs are trained on public data, and that's what I was asking Bard about, in a way; I really wish I'd saved that, and I did ask a follow-up question. So if you ask that type of question, these LLMs are going to look at what every human has put out there somewhere on a website or in an article. On company information, they can kind of give you this detailed but generic answer that may or may not apply. So you need your own dataset. The problem is, very few places have a sufficient dataset that an LLM is going to be able to read through and go, oh, I can really answer this; I mean, it's not even thinking, but to be able to parse all that. It can write something, but I think it's going to be eloquent with gaps. It'll have nice verbiage, but when you read the detail, it's like, wait a minute. Some very large companies, and I'm just making up an example, say an insurance company, have lots and lots of data that they can train on in terms of products. Depending on how they expose that data through their products and services, you can probably get a fairly good answer, like, do you insure me if this happens? I think it can respond to that, because they have huge data stores that are very limited in scope. Even if they have who knows how many different types of insurance, each type is limited, and it's like a vertical on that data. If you get to something that's much broader, think of even a car company. Someone asks a question, and it looks across all the different models of cars, like, oh yeah, you can get the twin-turbo version, when you're looking at some budget electric car; it might confuse these things. So it would have to have the mechanisms to say, well, which car are we talking about, and sort of query back. That stuff's not there in an LL, wow, I can't say it today, in GPT-type models, how about that. That's why I still say it's nascent, it's early days. But the direction you're talking about, yes: if you're a business, you definitely have to move these things to your own dataset to ultimately be effective about your own products, and still be careful about things like the temperature, how much it says. I've worked in regulated industries like banking and pharma. Particularly in pharma, I don't think they'll ever go to that, because literally one or two words can create a legal liability, so it's just not gonna work there. But if you're a T-shirt company and you have a lot of data, of course someone's gonna come in there and say, hey, can you print this on a T-shirt, and you have to have the guardrails for that as well. So there's still a lot of growth in this. When you get out of this hype cycle, and we went through that with 3D printing, for example, and AR when it first came out, which was good, but for both of those it was, oh, everyone's gonna have this in their home very soon, and I don't know many people who are not geeks that have these things at all, the hype cycle dies down, we start building from what there is, and it gets better. You start to get into, again, the gaps, the edge cases, the breakpoints, and find out that these GPT LLMs don't have the ability to really do that; you still have to build other pieces. So it's not as simple as, like you were saying, I'll just throw a chatbot on the website and we've got it.
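The "move to your own dataset" point plus the missing "query back" mechanism is essentially retrieval plus a clarification check. Here is a hedged sketch using toy keyword matching; a real system would use proper search or embeddings, and the product catalog here is invented for illustration.

```python
# Invented product snippets standing in for a company's own data set.
CATALOG = {
    "sport_coupe": "Sport coupe: twin-turbo engine available, leather trim.",
    "budget_ev":   "Budget EV: electric only, no turbo options, 250 mile range.",
}


def retrieve(question: str) -> list[str]:
    """Toy keyword retrieval over the company's own content (not public web data)."""
    words = set(question.lower().split())
    return [name for name, text in CATALOG.items()
            if words & set(text.lower().split())]


def answer(question: str) -> str:
    matches = retrieve(question)
    if len(matches) != 1:
        # The "query back" that most current chatbots lack.
        return "Which model are you asking about: " + ", ".join(CATALOG) + "?"
    return CATALOG[matches[0]]


print(answer("Can I get the twin-turbo version?"))   # matches only the coupe
print(answer("What colors do you have?"))             # ambiguous, so clarify first
```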

Dave Erickson 58:29
It's the data, it's the datasets, it's all of that. But it has some potential. I agree with you that the dataset is part of UX, and chatbots are going to be part of product development and integrated into product development. And…

Botond Seres 58:47
I have two questions, if you don't mind. One of the things I feel could be one of the best applications for GPT-likes is customer support for a company that's mainly text based. I feel like there is an amazing opportunity there to take all of the past support transcripts, going back 10, 20 years, train the LLM on that, and just have a first line of support with the disclaimer that, hey, this is an AI, press this button if you want to get connected to an operator, but it will try the best it can to help you out. I do wonder if you, being an expert on these things, can maybe think of other industries that would be a good target for early adoption of these technologies.

Ken Lonyai 59:49
Yeah, well, I agree with you. Definitely first-line support, definitely with a disclaimer, and controlling its inventiveness, again through things like the temperature setting, would be good. But that's across any industry. If you're talking about other use cases, it could be sales, at least the initial part of the sale, particularly where you're not selling enterprise things. I don't know that any salesperson wants to be represented by a bot initially, but think of a good product example here, let's just say sneakers. Maybe there's a website that sells 200 types of sneakers, one of these dedicated sneaker retailers, not necessarily the brand itself, but they have 200 models of sneakers, and you could get things going. It may get into a very deep, particular question, like, I see you switched from a polyurethane heel to a poly-something heel, why? I'm not even sure a human salesperson is going to know the answer to that. But yeah, it could be used at a certain level of sales, definitely. The whole change in search right now is something I've complained about for a long time. Just a little side note, and I'll make it about me, but I think it's all UX people: we have this ability to question slash complain. The biggest word a UX person uses is why. You see something and think, why the hell is that button over there, it should be here, and you start thinking. So there's always the question: is this thing an aid to UX people? Sure, I'll get some hate email soon. But you can use it as a sales tool and try to get it to figure out what people might be after and how we can answer. But again, there's still that limitation: you probably can't use it for things that really require understanding the customer to sell; it has to be something at least initially at the surface level. The salespeople that sell sneakers, and I'm not maligning them, are not going to know the motivation for why the company did something; it's very unlikely, unless they've really studied some hard-to-find data, so they just won't have that information at hand. So those types and levels of things, and again, I'm not saying anything about the people involved there. But versus, hey, I sell nuclear power plants: well, no bot is gonna start that conversation off, like, hey, what kind of nuclear are you looking for, how many kilowatts do you need? There's this extreme other end. All I'm saying is that the bigger and more challenging things probably will never start with an AI process; that's always going to be human to human. But…
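Putting the first-line-support idea and the low-risk framing together, the minimum guardrails are a disclaimer up front and an always-available path to a human. A small sketch follows; the wording, the topic list, and the detected_topic input (which would come from some intent classifier) are assumptions about how such a bot might be wired, not any specific product's behavior.

```python
DISCLAIMER = ("Hi! I'm an automated assistant trained on past support conversations. "
              "I'll do my best, and you can type 'agent' at any time to reach a person.")

# Assumed set of topics considered safe for the bot to answer on its own.
LOW_RISK_TOPICS = {"availability", "sizing", "price", "shipping", "returns"}


def route(user_message: str, detected_topic: str) -> str:
    """First-line triage: answer low-risk topics, escalate everything else."""
    if user_message.strip().lower() == "agent":
        return "ESCALATE_TO_HUMAN"
    if detected_topic in LOW_RISK_TOPICS:
        return "ANSWER_WITH_BOT"
    # Anything higher risk (account changes, complaints, legal) goes to a person.
    return "ESCALATE_TO_HUMAN"


print(DISCLAIMER)
print(route("Do you have these in size 11?", detected_topic="sizing"))  # bot handles it
print(route("agent", detected_topic="sizing"))                          # user opted out
```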

Botond Seres 1:02:57
I think what you're saying is the lower the risk, the better.

Ken Lonyai 1:03:02
That's a much better way to phrase it than I did, thank you. Yes, it's a risk type of thing, and a level-of-understanding thing. Most of the questions around the sneakers are going to be availability, size, price, shipping, comfort, those kinds of things, and you can use the AI to summarize those types of things pretty well. That I would call low risk, and that can cover lots of categories.

Botond Seres 1:03:34
That's great. And, traditionally, I'll ask you the last question: what does the future of UX look like?

Ken Lonyai 1:03:49
Oh, I can't predict that. I don't think technically it's going to change too much, because it's still the same underlying process. Even if we automate it, even if you can have an AI that can do the research and ask the questions, we still have to do those steps to understand what we're building. We're never going to say, let's build a product, but we don't need to understand who's going to buy it, who's going to use it, what's going to make them happy. The basics are the basics. Just like if you want to be the best athlete in some sport, it's always the basics that coaches harp on; it's not some trick shot or something like that. So I don't know that the UX process is going to change, until the point that the AI is so intelligent that it's just selling to itself or doing something for itself, and then it might be a different kind of AI. But when there are humans in the loop, especially as the end user, the process won't change. The tools definitely will evolve. AI will definitely infiltrate, if that's a good word to use here, and be helpful. I don't think it's ever going to be the only way to do it; it's humans augmented by AI, but there will be some shoving back and forth, people eliminated, and probably people brought back in when we find that, yeah, we still need people. This is the early-days evolution thing. But I think that's what UX will always be.

Dave Erickson 1:05:18
All right, well, Ken, how about you talk a little bit about what your consulting does and what you do, and how people can get a hold of you if they wish to do so.

Ken Lonyai 1:05:32
Sure. So the best way to get a hold of me is KenLonyai.com. In the future we will have two podcast series: one will be about AI, one will be about product management. I try to keep those two separate. And no one wants the Ken Lonyai show, no one wants to tune into that. Maybe my mother. So there's another website, AIproductguy.com, but right now there are placeholders for the future podcasts there. As far as what I do, my thing, whether I'm consulting or working in a company, is just to create the best products and use the skills. Again, this whole InO products thing, I really just changed my name to that. I do not fit well into any category, so I'm always that square peg. If there's a UX need, it's a round hole; if it's a product need, it's a triangular hole; and I'm the square peg. I'll just leave it with this little thing: I always say that when I was a kid, kids wanted to be nurses, doctors, firefighters, astronauts; not a single kid ever said, I want to be a product manager. So you get to these paths in unforeseen ways, and it brings together this mix of things. So I do all that stuff. Happy to talk to people, help them out, make better products, create delight.

Dave Erickson 1:07:00
Great. Well, Ken, thank you so much for chatting with us about the components of a successful UX strategy, and your thoughts on integrating ChatGPT and other chatbots into the UX process and design.

Dave Erickson Outro: 1:07:30
For our listeners, please join us in the first week of each month for another ScreamingBox technology and business rundown podcast. Until then, may your user experience of life be graphically pleasing and intuitive. Thank you very much for taking this journey with us. Join us for our next exciting exploration of technology and business in the first week of every month. Please help us by subscribing, liking and following us on whichever platform you're listening to or watching us on. We hope you enjoyed this podcast, and please let us know any subjects or topics you'd like us to discuss in our next podcast by leaving a message for us in the comments section or sending us a Twitter DM. Till next month, please stay happy and healthy.

SUMMARY

Introduction to the podcast. (0:04)
In today's market, products need to be great to stand out in a crowded field and be successful.
Get notified when next month's podcast is released.
Integrating ChatGPT and other chatbots into the UX process and product designs is key to building successful digital products going forward.
Key considerations for defining a solid, effective UX strategy.

What is UX and why is it necessary? (3:45)
UX is critical to the success of a product, and so is the UX process.
The first step in developing a quality UX is to know who the customer is that you're targeting, and then start talking to those customers or potential customers.

How to get feedback from your users. (6:49)
What the user is using now and what they don't like about it are part of the questioning.
Once you get prototyping, you also want to get feedback from the user, especially if you have trusted potential users.
A greenfield product is a different ball of wax, and it is harder to start a UX process for.
One of the most fun things about UX strategy is the smoke and mirrors that go into things, like the like button on Facebook.

The problem with wait time indicators and queue messaging. (12:18)
An example of how chatbots can be the first initiation of a conversation or dialogue, though they can't do everything.
The goal of UX is to be delightful, not frustrating.
The end result of a good UX, and the characteristics of a product built with a great UX process.
How to measure a product's UX.

Why not a persona for salespeople? (16:31)
Bob is a 40-year-old salesperson at his company who wants to make commission to support his lifestyle.
The typical cycle in UX development, and why it could be quicker.
All roles should have a level of understanding of all aspects of the product, including UX, product, engineering, sales, marketing, etc.
Having a better understanding of the iterative process and how to speed it up is important, starting with internal communication.

How many people to interview and how long to interview? (21:37)
A standard question when starting a UX process: how many people to interview, and how long or how deep to go in the interviews.
The most well-known UX people that talk about interviews and testing on the testing side usually say 50 is the max.
Some UX firms talk about as few as five people; they probably know more than me, but it never quite feels right, even if there are statistics behind it.
The art is trying to know what that is, and how big a change it is.
It would be very hard to talk to a lot of people and not get feedback.
If there is one interview with 1,000 participants, does it count as one interview under the five-interview rule?

What is the deliverable? (28:28)
It is better to do the UX process before any product development is done, because part of the UX process is getting to understand what the product should be.
When going through a UX process, what is the deliverable for the next step of the process?
The PM is the one who owns the product; the UX person is basically talking to everybody, figuring out what they need to make to have the best product.
When the product succeeds, the team succeeds, and when the product fails, it's the PM.

How do you ask the right questions? (33:58)
There are gray areas everywhere, and there is rarely the full staff needed to do a proper UX process.
AI may be an option for filling some of those roles.
There are a few articles on how to use ChatGPT for this, and a lot of other articles and videos about ChatGPT in general.
It is a great starting point for those who want to get started in the field, but you still need to review the questions.

Using lorem ipsum to summarize results. (38:19)
Everyone uses lorem ipsum, but it's a horrible choice when you want detailed, realistic response text in a layout.
How to summarize results quickly.
The most resource-intensive part of using these tools is setting them up to produce the results you're looking for, especially in the right format.
Three topics: innovation, product management, and UX.

UX rules for developing a chatbot that actually functions correctly. (40:53)
A proof of concept is being developed, but there is no guarantee about what it will say or whether it will tell you something incorrect.
A test was done, turning the temperature setting up and down to get the right level of creativity.
There is some tweaking on the prompt side, and control of what is being fed in and what is coming back, in terms of UX and content.
Adam is always trying to break everyone's AI or NLU, just to test how good they are.

The edge cases that get neglected. (47:34)
You have to start thinking of all these edge cases; the little edge cases are always what gets neglected.
State machines are an integral part of UX, but it depends on what's being built and how it's being built.
The state machine could be something like navigation in an app, which is often neglected; the lack of testing has created that problem.
The NLU thinking part has drastically changed in the last year.

How can a chatbot help a customer? (53:01)
One of the issues is that as people use ChatGPT, when they see a chatbot they are going to think they can interact with it the way they would with ChatGPT.
The problem is that very few places have sufficient data.
The hype cycle has died down, but there is still a lot of room for growth in the industry.
There is a lot more room for companies to use their own data sets to ultimately be effective with their own products.
The data set is part of UX, and chatbots are part of product development and integration into it.
Other industries that would be a good target for early adoption of GPT.

UX and how it will evolve in the future. (1:01:11)
UX people have the ability to question/complain. The biggest word a UX person uses is why.
Low-risk use cases cover lots of categories, sneakers included.
Ken talks about how UX will evolve in the future, and how people can learn more about what he does and get a hold of him.

Contact Info and Outro. (1:05:32)
The best way to get a hold of Ken is KenLonyai.com; placeholders for his future podcasts are at AIproductguy.com.
Ken describes himself as the square peg. He is happy to talk to people and help them make better products and create delight.
Join the next ScreamingBox technology and business rundown podcast.

Creators and Guests

Botond Seres
Host
ScreamingBox developer extraordinaire.
Dave Erickson
Host
Dave Erickson has 30 years of very diverse business experience covering marketing, sales, branding, licensing, publishing, software development, contract electronics manufacturing, PR, social media, advertising, SEO, SEM, and international business. A serial entrepreneur, he has started and owned businesses in the USA and Europe, as well as doing extensive business in Asia, and even finding time to serve on the board of directors for the Association of Internet Professionals. Prior to ScreamingBox, he was a primary partner in building the Fatal1ty gaming brand and licensing program; and ran an internet marketing company he founded in 2002, whose clients include Gunthy-Ranker, Qualcomm, Goldline, and Tigertext.