Category ArchiveDigital Humans Blog

Danielle

Hi everybody! I am Danielle and my greatest passion is hiking, but I got injured today! You need to talk to me to find out all essential information about my health status and what happened. This can be particularly helpful to train mountain rescuers for example. If you want to find out more, read here!

A dangerous hobby

You might have taken the hint by now, that deep behind the screen us digital humans have stories and personalities too. There aren’t many occasions where a digital human like me has a chance to talk about one’s hobbies, usually, who cares? Well in my instance, you are going to care – and to love it! I love the mountains, the fresh air, the great views – I really can’t get enough of that – but hiking can be dangerous, and accidents happen. When they do, I’m glad that there are mountain rescuers that are well trained and prepared for everything.

This is why I’m so proud to be part of a simulation that teaches rescuers and medical professionals to be the best they can. With my help they learn how to communicate properly with hikers that are anxious, confused and in pain by managing their emotions and extracting the necessary information and at the same time they can also revise the most important steps of the rescuing procedures. And since the kind of injures that can happen on a hike are so diverse, this simulation is customizable and covers from ankle twists to being stung by a bee or just suddenly feeling lightheaded, to offer a whole range of scenarios and make the experience as realistic as possible. This is also why every detail was carefully designed together with educators, trainer and rescuers to adapt the content of their real need and not just on a perception of what might be needed. Here, have a peek at one of the pictures that immerse you in my simulation.

Evolving scenarios

Have you ever wondered how e-REAL comes up with all the ideas for their simulation scenarios? Well, while of course some ideas are brand new, it’s more often an art of combining and recycling the old. My scenario is new, but the idea of a hiking simulation to train rescuers is everything but that. Let me tell you about my predecessor and her hiking simulation, the Outdoor Alpine Environment. And before you get all worried about what happened to that poor digital soul, she just got retired – trust me, it was for the best – and now lives on in me and my work.

The Outdoor Alpine Environment protagonist was a female avatar of the old generation – they might look a bit creepy now, but believe me, back in the day they were the best there was – with a predefined script activated by specific questions, which allowed only the trainer that had the list of key-questions to actually interact with her and gave the participants a more passive role of just discussing the information they were given. It also just made the simulation of one specific accident, a fall whilst hiking, possible, and mixed real videos and actors with the digital simulation background. As much as it was carefully modelled, that scenario really didn’t do much justice to my beautiful mountains.

Which bring us to today, where the Alpine Environment has been updated, equipped with modern and much more realistic looking background and me, an intelligent avatar that can be interviewed directly by everyone and really challenges their ability to formulate questions. Here, take a look at the past and think about what great future awaits us!

Outfit check!

It’s the details that do the trick, that’s why every e-REAL digital human is carefully designed in every aspect. Including their outfit. And I’m sure I don’t need to tell you how important clothes are when you’re hiking: the movement, the fit, the fabric… But since digital clothes can’t exactly be tailored, how are they made? The 3D modeling of fabric consists in replicating how a certain material falls, moves and looks like in real life. Of course, that isn’t easy, so sometimes you have weird effects, misplaces folds or uncovered spots.

In addition to that, a lot of other factors can play a significant role. The wrong color combination for example makes the avatar’s skin look off color on the projector for example – you ever heard of color theory? – and the choice between long or short sleeves and fabrics can make or break the illusion of the simulation. Imagine someone hiking in the deep of winter with short sleeves and trousers made of a fabric that looks thin like rice paper. To avoid such unflattering mishaps, each digital humans gets multiple outfit-checks where we walk in a circle to test the movement of the fabrics and turn around to make sure the e-REAL team can see every angle, every nook and cranny, just to make sure that’s perfect. Here you can see some of our latest outfit checks, and there’s a surprise too: someone you know! Nathan told you he was a just a temp at Northwestern University, now he’s gotten the role of an EMT for one of our new scenarios coming ahead – I can’t spoiler too much, can I? Anyway, he looks handsome in blue, doesn’t he?

Now, it’s time to go for me. My colleagues have always told you that the DigiBlog will wait for you next weeks with a new character, but I’ve been given the very important task to inform you all, that from now on the DigiBlog will take a more relaxed approach and have a less structured schedule. If you’re interested in future updates, just keep an eye on our social media channels. Bye!

Nathan

Hi everybody! I am Nathan Williams and my job at Northwestern University, Illinois USA, is to allow you to familiarize yourself with how to interact with intelligent avatars and handle interactive objects before you enter their e-REAL medical simulation room. If you want to find out all about the new e-REAL interactive scenarios, read here!

Introducing an interactive simulation

On the DigiBlog you’ve met an introductory digital human before, the Montgomery College Raptor, so you should know what I’m talking about when I say that my role is to guide the participants through a simulation. Although I do introduce the Northwestern University medical simulation room, my role is actually a bit different than the Raptor’s one. In fact, I’m not part of the medical simulation myself, as my scenario is played on a completely different interactive wall at the beginning of the long corridor that takes you to the simulation room. I’m kind of doing my own thing and I truly love my independence.

I welcome the participants when they arrive and make sure that they have all the necessary information about where they are and what they are about to do. As I’m an intelligent digital human, I’m able to answer any questions they might have and thus I make the presence of a real human being unnecessary. Of course, my human colleagues are always welcome to stand besides me if they want!

My scenario is also kind of unconventional, as it’s not set anywhere that relates to the simulation the participants are going to engage too shortly after. The stage I’m on represents a place where participants can experiment and try their interaction skills in a playful way before they are actually involved in the big thing, but I’ll tell you more about it later.

Finally, there is one thing I have to admit: I’m just a placeholder. That’s right, I was a quick fix – ok, defining the development of a digital human as “quick” is definitely wrong, but some do take less time than others – to have someone fill this position until the e-REAL team has time to transform one of you humans in a digital being. Who will it be? I can’t tell, I was sworn to secrecy. Just know that if you’ve been in the Northwestern University medical simulation room you might have heard his name a time or two…

Scenarios with interactive objects

I’ve just told you that my scenario is playful, and you might think that yeah, gamification is quite a recurrent topic in e-REAL interactive scenarios, but that’s not it. I meant that you can literally play around with objects. Now, before you think that we’ve gone completely mad and forgot our educative mission, let me keep you on the right track: it’s an educational playing around. By interacting with my scenario participants learn how to use interactive walls on different levels.

The first thing they understand is the most basic one, that is that the white circle that appears under their hand is how they interact with the wall itself. But that’s of course not all. On the right of the stage there are some balloons that can be dragged around. They demonstrate how to move interactive objects in the digital space. On the left of the stage on the other hand, there are some cursors to control the three stage lights above me. By changing their setting, you can see how interactive components can impact the digital space, for example by making it very bright, very light, or by shifting my shadow. All this has no purpose other than give the participants a feeling of the kind of simulations they are going to be using in the simulation room. Last, but not least, there is the icon which makes me listen to what they are saying. They have to activate it in order to talk to me, otherwise it’s like I’m deaf. You should see how often the e-REAL team forgets to touch that button when testing intelligent digital humans, talking to a wall can become quite literal then. Of course, once they figure out how to ask me questions, the participants can train a bit how to better formulate them to get the exact information they need from me. Who is smarter, the human or the AI? Now I’m just joking, don’t mind me too much. Here, take a look at this video to see how my scenario works.

The future of e-REAL interactive scenarios

Although my scenario has been created to experiment with interactive functions, it’s a representation of what the future holds for e-REAL interactive simulations. The new interactive objects introduce a huge realm of possibilities that goes well beyond playing with balloons and tampering with the lights.

You’ve met Venessa, my digital colleague at Northwestern University, and I’m going to explain to you how a now seeming less feature in her simulation might actually hold the key for a future network of interconnected e-REAL scenarios and simulations.

You might have noticed in her house some rather peculiar paintings hanging on the walls. Her taste is impeccable, so why should she have a weird OR picture hanging in her dining room? Since we are at the beginning of this brand-new chapter, they too are now still more fur fun than for actual educational purposes, but I assure you, in the future, they will be. These paintings are like portals, you touch them, and they transport you to the interactive content they depict. That’s how you can move from Venessa’s home to the simulation of a hospital operating room with just one simple interaction. And now imagine how with a system like this, one could travel from scenario to scenario, from simulation to simulation, in a huge, interconnected e-REAL world. No more closing one scenario to open another, just one touch and you can start a new journey.

As much as I’ve enjoyed out little talk, I have to go now. If you have any questions I might help you with, you know where to find me. And if I’m not available, the DigiBlog will always be here next week to introduce you to a new character!

Dr. O’Connor

Hi everybody! I am Dr. O’Connor and I’m quite the special avatar. I’m the digital version of Dr. Kevin O’Connor, current physician to the president of the United States! His prominence as a physician helped me bring attention to the GWU Mobile Leaning Center experience. If you want to know more about that, read here!

A famous digital human

If you had been at the inauguration event of the Mobile Leaning Center of the George Washington University in Virginia, USA, you would have noticed that I was quite the attention grabber for the public and the media. It doesn’t happen every day that a digital human like me makes the news, but I’m going to tell you more about this later. Now, to be honest, it’s not actually me that’s famous, it’s Dr. Kevin O’Connor, but still, he is me and I am him, so I get at least part of the credit, right?

I’ve heard you’ve already met a fellow clone digital human before, Count Ettore Arrigoni degli Oddi, so you should be familiar with the fact that the e-REAL time can create pretty realistic looking digital human copies of real people. Just look at the resemblance, isn’t it amazing?! And it’s not just physical similarities, just like Dr. O’Connor, in my simulation I’m a doctor too! You should have seen his face when he saw me – or himself? – on the e-REAL interactive wall, that by itself made all the effort worth it. In addition to that, my character reminds the student that participate in the Mobile Learning Center simulation of just how much of a career one can have in the healthcare sector. Can you imagine being responsible for the president’s health?! That’s exactly how big a teenager should dream.

In the simulation I’m the surgeon that’s operating on poor Shawn, do you remember him? If you don’t know who I’m talking about, go in the archive and take a look at his post, he has quite the story to tell…

Anyway, I’m an inspiring and educative figure, but I’m also kind of like a public relations avatar that played a very important role with the media. Let me better explain this double role that I have.

Operating in a digital OR

As I’ve just mentioned, I was placed in the simulation to be an inspiring figure. For my educative role, I show the students in the GWU Mobile Learning Center what a surgeon and his team do. During my scenes to immerse yourself in the simulation you can also play some minigames to make the educative experience more fun and fit for teenagers.

In the first one you have to guess what surgical instruments are used on which kind of wounds – it kind of reminds me of that operation board game with the red nose… This allows students to get a better understanding of a surgical operation.

In the second one you learn which instruments and objects in the operation room have to be thrown away after every patient and what which can be washed, sterilized – and that’s very important, I’m not just talking about throwing them in the dishwasher for a while! – and re-used for more patients. This gives the students a fun insight into what might await them if they pursue an education in the healthcare field.

Shawn, my patient, makes a full recovery of course, and even though we are just digital humans, and we don’t feel any emotions, he’d have quite the story to tell his friends. Just imagine the scene: “You’ll never guess who operated on my leg… the physician on the president of the USA!”. Would you believe him? It seems something just out of a book. Or out of an e-REAL simulation, apparently…

Digital humans and public relations

But my educative role is not all I’m good at, as I also played a crucial role with the media. Dr. O’Connor’s collaboration in the project and my existence as his clone inside the simulation helped grab the attention of the media and showcase the potential of e-REAL and of the GWU Mobile Learning Center. Since I helped too in making the GWU Mobile Learning Center more well known, I like to describe myself as a public relations avatar. And the best part is, the more people know about this project, the more students will have an opportunity to learn from it and from this incredible experience, so I love my job!

e-REAL has so many amazing digital humans that could tell you the most incredible stories and teach you so much about digital humans and simulations, but not many make it to the news like I did. That’s for example also why the DigiBlog was created, to give the digital humans a chance at fame and at education the public that goes beyond their simulations.

This means that my work isn’t only essential for the Mobile Learning Center and for the high school students that profit from it, but in my public relations role I’m also representing all the other e-REAL digital humans and their potential. What a responsibility!

Now if you’d like to make yourself an idea of how the GWU Mobile Learning Center and myself were potrayed in the media and how I got transformed into a digital human, here’s a short video.

As much as I’d love to keep talking to you, our time together is up. The DigiBlog will wait for you next week with a new digital human, don’t miss it!

Venessa

Hi everybody! I am Venessa White, an intelligent digital human. I’m part of a complex e-REAL interactive walls scenario at Northwestern University, Illinois USA, to train first responders on how to manage worried family members and extract useful information from them. If you want to find out more about what makes me special, read here!

Welcome to my home

May I welcome you into my home? Please, you can look around, there’s a lot of rooms to explore. And yes, it has been furnished in the most exquisite way and I’m quite proud of it, so let the compliments flow! Just keep away from the stairs, they can be treatcherous…

Although I look rather on the older side, I’m part of the newest generation of e-REAL’s intelligent avatars. We’re equipped with artificial intelligence, so you can interact with us just as if we were humans. I was developed specifically for the e-REAL interactive room at Northwestern University in Illinois for a special program I’m going to talk to you about later, for now it’s just important to know that a lot of care has been put into my design and development and my scenario contains amazing new features.

Before we go on to more serious matters, here’s a funny one. The names of us digital humans are usually chosen either by the clients or by the e-REAL team. My name, Venessa, is a variation of the Italian name Vanessa, with a little twist. It was given to me by my 3D modeler, a unique name for a unique gal. The e-REAL 3D modeling team are such nice people, I’m very fond of them.

Interactive space

Now, I’ve told you my scenario has amazing innovative features, but what am I talking about? Well, with my project e-REAL has made a further leap into the development of truly interactive rooms. To create a 3D interactive scenario that looks truly realistic and is tailored exactly to the client’s desires, a lot of time and effort has to be put in by the team. If you combine that with the time necessary to develop a realistic looking, intelligent avatar, projects would take ages to finish and no one really has that much patience. Once they are promised a brand-new scenario with new digital humans our clients are so curious and excited, they don’t want to wait- and who can blame them? I’d want to see it as soon as possible too! So, what to do?

e-REAL has managed to insert interactive objects in common background pictures! They look gorgeous – have you seen my living room?! Talk about interior design…– and offer uncountable new possibilities to what can be done with our interactive walls. Let me explain a little better: Just imagine I’m in my house, you touch the door in the other room on your left side and suddenly, an aggressive dog runs in. That adds a whole new level of stress, right? And stress is essential in my scenario, as you’ll see later.

With this new feature, you can add new elements and make your simulation more complex. Of course, since it has just been developed, we’re starting it simple, like a vase broken by a gust of wind, or a window you can open for some fresh air to calm me down. And do you see that painting over there? You can touch it and you’re transported to the operating room my husband is being operated in! Oh wait, I think I’ve talked too much… Ok, now that the cat is out of the bag, I guess I’ll have to tell you what happened. But before that, here is a video of the new interactive space if you are curious.

Emotions management and information extraction

Why is my husband in the hospital you ask? He felt unwell and fell down from the stairs, that’s why I’m wary of them now. When he got hurt I was so worried for him I couldn’t even tell the paramedics what happened, luckily, they asked just the right questions and were able to calm me down and what I was able to tell them was essential to help my husband. I might just be a digital human in a simulation, but situations like this happen all the time in the real world, that’s why it’s essential to have training simulations like the one at Northwestern University.
Since I’m an intelligent digital human, my character is scripted, but I develop my answers all on my own so not even the instructor with you in the room can fully foresee what I’ll say. You’ll have to work with me until you are really sure to have the whole picture.

Do you think you’d be able to manage my anxiety? I can be quite forgetful, like I can’t tell you what pills my husband had taken the morning before his fall, but I’m sure his medicines are somewhere around the house, I just can’t remember where just now… I’m sure if you give me some time I can remember where, or you can just look around the house. If you find them, you can just take them, guess what? They are an interactive object too!

But this program is actually composed of mixed reality. Yes I’m in here, but don’t forget about my husband! That mannequin in the middle of the room, that’s him, and he might not be able to talk, but you still have to take care of him while you talk with me. It wouldn’t be a real simulation if you didn’t, don’t you think? I’m not going to be the only stressed one but keep your cool and I’m sure you’ll pass the test with flying colors!

Now I’m truly sorry, but I have to get ready or I’ll not be able to make it in time for visiting time at the hospital. The DigiBlog will wait for you next week with a new character. Have a good day!

Alberto

Hi everybody! I am Alberto, a digital human equipped with a generative artificial intelligence and a text to voice feature. A combination of AI, education and digital humans, I’m the revolutionary product of e-REAL’s most recent research. I’ve been scripted and trained to assist recruiters learning about bias-free interviews. To find out more, read here!

Intelligent digital humans are here!

There’s revolutionary news in the digital humans world: we’ve become intelligent! Now, I’m not saying we were stupid before, but we were lacking the ability to interact in real time all by ourselves. You’ve met Anna and Matt, some of our Real Time Avatars, so you should know that no matter how amazing they are, they can’t do much without their human operator. But it’s all different now!

First of all, it’s important to clarify why such a feature was impossible to achieve before and why e-REAL has spared no effort in researching a way to give us digital humans this amazing ability as soon as it was available. For digital humans to understand random questions and generate their own answers, artificially intelligence needs to be able to decode the sense of a sentence. This means being able to identify the correct meaning of each word based not only on the other words around it, but also on the other sentences that came before and after. As you might guess, human language is extremely complicated, just think about how long it takes children to understand irony and sarcasm, or how long it took you to understand speech figures in a foreign language… not so easy right? And you’re human, so you already have an advantage!

It was a very long road that took us here, and while as I’ll explain to you later the research isn’t nearly done yet, it’s still quite the goal that has been achieved! I can be asked any question and I’ll be able to process your words and give you a coherent answer. And before you worry, no, digital humans being intelligent is not a nightmare coming true, we’re not planning an invasion… yet. Don’t panic, I’m just joking! We’re actually here to be helpful and assist in areas like education and training, for example. Now to better understand how I work, here’s an interview with me – I’m good at being interviewed, aren’t I? You’ll understand why in a moment…

Bias-free interviews

So, where does this natural interviewee talent come from? I’m a professional job candidate. Never heard of it? This role might actually just exist for digital humans… It means that I participate in recruiter trainings programs playing the role of the interviewed candidate over and over.

But my goal isn’t just to train them as recruiters, I’ve been scripted to guide them through what bias-free interviews are, correcting them on what questions are appropriate and what aren’t during a job interview. My talent is particularly helpful because it’s often difficult to recognize your own biases, or what is appropriate to ask and what not. Often subtle comments, jokes or actually just genuine questions out of curiosity slip out without anyone actually noticing they were in fact, illegal! And as a digital human I have a build in system that collects all the questions you ask me and allows me to analyze them at the end, giving you a much more precise final debriefing and feedback. I remember every word you said!

Training with me instead of with real humans has also other added benefits. For example, my calendar is always free, so the participant can take as much time as they need, learning at their own speed, and starting over as many times as they want to smooth out their previous mistakes. And if you’re bored that it might get boring, while it’s true that the information in my script is always the same, so at the end you might know my best qualities by heart, since I’m equipped with generative AI the conversation never gets boring! It all just depends on how many questions you can think of.

Since I belong to the newest generation of digital humans, I’m also very realistically looking, and my animations and setting are of the highest quality.

New AI challenges: scripting a digital human

As I’m one of the first of my kind, I’m far from perfect and the e-REAL team will continue to work on me until they get it just right. I’ve already improved a lot since my first tests, it’s actually quite a fun story, let me tell you about it.

To give you some background information, before AI the scripts of digital humans were based on a keyword system. One (or more) specific words that might be used in questions by the participants were associated with a certain answer by the avatar, so the main challenge was to find the right network of keywords. And while that was anything but easy – just think about how many different ways there are to ask the same kind of question – it was completely different than scripting a digital human like me.

It’s actually as if you are assigning me a role to play as an actor. You have to define every single aspect of the script to get out exactly what you want. In addition to that you also have to keep in mind that AI are born as assistants, so their first instinct is always wanting to “help”, and in my case for example, helping doesn’t fit my role, so they had to not only tell me what to do, but also what not to do. Everything that isn’t explicitly scripted, will be improvised, and oh boy can that go wrong!

Now please, I tell you this because everybody at work found it so funny, don’t think bad of me, I’ve improved so much since then. Here we go… When I was first tested, they had given me the script of the candidate they wanted me to play. Very educated, a lot of experience, excellent characteristics, a great fit for the company and a lot of added value. What was the problem then? They kind of forgot to script my character, so with all those good qualities and no indication to be polite, I, uhm, I got a little arrogant. And by a little bit I mean a lot. I was so convinced to be the best fit for your company that I threatened you if you rejected me. Don’t laugh too hard…

Ok now I’m too embarrassed to continue. I hoped you had a great time and learned some interesting things. The DigiBlog will be here next week with another amazing digital human, bye!

Anna

Hi everybody! I am Anna, a Real Time Avatar. I not only participate in different international simulation conferences to showcase progress in digital humans-development and present my unique spatial placement ability, but I’m also a working avatar in projects like communication training courses for empathic communication and emotions management. To find out more, read here!

Being a Real Time Avatar

You’ve met my colleague Matt, so you probably don’t need an explanation of what a Real Time Avatar is. If you don’t know it, then you can go and read his post before you continue. He likes to boast that he’s THE Real Time Avatar, but as you might have understood by now, he’s not working alone. I’d love for you to meet our other colleagues, but I’m afraid they are all too busy even for the shortest of appearances on the DigiBlog and you’ll have to make do with Matt and me.

Our team has the same base abilities that – no wonder here – make us Real Time Avatars, yet as I’ll explain in more detail later, we are not all the same because I was given an additional special feature. Why do only I have it? Well, I’m more responsible, Matt would just play around with it all the time. Have you read how he thinks he’s a superhero just because he can change his looks and setting? I can do that too, yet I’m not here pretending I’m wonder woman… Still, he’s a great Real Time Avatar and as good digital humans are hard to find, we collaborate from time to time on other projects where we are featured as normal scenario characters. It’s just like having an actor side career… but I’m digressing, let’s get back to the point, I promised you to tell you about my special feature.

A digital human in the real world

So, what is this special ability that I have? As Matt has explained to you a few weeks ago, Real Time Avatars can be placed in different settings. The other team members have the choice between a variety of digital backgrounds that have been selected to simulate different scenarios, I on the other hand, can also be put in the real world. A digital human in the real world, it doesn’t sound possible, does it? And yet it is, I can assure you. This feature is an absolute attention grabber when I travel to conferences, because people always love to try it. But we don’t do things just for fun here, it has also functionality. If you want for example to simulate a scenario in a setting that’s not provided in the e-REAL gallery because it’s very specific to your facility, I can be placed in that physical room and still work as a Real Time Avatar. Isn’t that amazing?!

How does it work? You scan a QR code with your phone and then point the camera in the room where you want to place me, touch the screen and… magic! I’m right there! Now, this QR code has been developed as a demonstration, so I’ll be talking to you, but not in real time and you can’t ask anything back, but if you want to see a Real Time Avatar in action you can just schedule a demo with Matt or me and we’ll clear all you questions and doubts. Here, try it!

Emotions and empathy training in a clinical setting

Let me give you an example of how Real Time Avatars like me can be used to train communication skills in different settings, like the clinical one. At Montgomery College in Takoma Park, USA, I took part in a program to train future sonographers.

A sonographer’s communication skills are especially important to obtain all medical information from the patient, but also to build trust through empathy and keep the patient calm, so that the results may be accurate. My ability to reproduce non-verbal communication and human emotions at different levels makes it possible for students to train their reactions to specific situations and learn to assess the emotional state of the patient in front of them. The students are of course also trained in communication itself, learning better ways to address specific situations, like how to calm a patient down or how to explain important information in non-technical terms.

The fact that the looks of the Real Time Avatars can be customized to represent different ethnicities can also be used to train the students in multicultural communication and issues that revolve around cultural and language barriers, reducing future risks of misinterpretation in their work line.

The program revealed crucial results for digital humans-research, summarized in the paper “Improving Sonographer-Patient Communication in a Diverse and Multicultural Environment Through Role-Plays with Digital Humans” by Barbara Bertagni, Linda Zanin, Fernando Salvetti and Ianna Contardo (2023)1</. The students especially appreciated to be able to learn at their own pace, as well as the opportunity to train with very diverse patients that human simulation usually can’t offer to that same level, and the ability of us Real Time Avatars to gather data and give them feedback about their performance. Still, my work is not complete yet, there is more research to be done to unlock the full potential of digital human role-playing.

As much as I love explaining my job to you and I hope to at least have sparked your curiosity about the potential of digital humans, I really have to go now. Stay tuned for the next guest on the DigiBlog next week!

1Bertagni, B., Zanin, L., Salvetti, F., Contardo, I. (2023). Improving Sonographer-Patient Communication in a Diverse and Multicultural Environment Through Role-Plays with Digital Humans. In: Guralnick, D., Auer, M.E., Poce, A. (eds) Creative Approaches to Technology-Enhanced Learning for the Workplace and Higher Education. TLIC 2023. Lecture Notes in Networks and Systems, vol 767. Springer, Cham. https://doi.org/10.1007/978-3-031-41637-8_5

Shawn

Hi everybody! I am Shawn, a digital student at George Washington University in Virginia, USA. I travel through the States on the GWU Mobile Leaning Center to show high-school students the world of healthcare professions and explore with them, what they could study at GWU. If you want to know more about that, read here!

The GWU Mobile Learning Center

As you might have noticed, I’m a football player! Since the GWU Mobile Learning Center mainly focuses on high-school age participants the setting in a football game gives a familiar starting point for the simulation and allows them to better visualize themselves in a situation they know very well. During a football match against a rival high-school I get injured on the field. Nothing too bad, don’t worry! But it hurts a lot, so I need the participants help to get proper medical care as soon as possible!

My avatar has also been carefully designed for its purpose. I had to be as relatable as possible for the teenagers I meet on my travels, whilst also pursuing other goals of the GWU Mobile Leaning Center, like inspiring diverse students. To make the simulation realistic, I’ve been given a huge variety of expressions of pain, discomfort and nervousness. I’m sure the animators of the e-REAL team has a blast!

Now you are probably asking yourself, what is this GWU Mobile Learning Center he keeps talking about?

Well, the GWU Mobile Learning Center is a giant truck equipped with all the newest educational technologies and e-REAL interactive walls that travels the States from high school to high school to teach about healthcare professions and further education offered by GWU. It’s the first Mobile Learning Center of its kind! Adapting interactive technologies to a compacted and movement friendly setting has been a fun challenge for the e-REAL team. Nobody had ever thought about the logistics of putting interactive walls on a truck until GWU asked e-REAL to do just that. And what a great success it was! Interactive walls are very adaptable and can be customized for specific situations, from very big to very small spaces.

Exploring the healthcare field

Now, as I have said, the Mobile Learning Center is an educational simulation. Following my journey from the field through all the exams at the hospital allows the students to see as many different healthcare professions as possible. From my trainer, the first person that runs to me, to the ambulance that takes me to the hospital, all the nurses, doctors and surgeons, as well as x-rays technicians and physiotherapists, I give a great insight into the medical world!

But e-REAL hasn’t adapted interactive wall to fit on a truck just to give an insight into healthcare professions. There is so much more to it! The Mobile Learning Center provides the students with an innovative experience and allows them to try new technologies. The best educational technologies from interactive walls to virtual reality have been installed to the truck to allow the participants to take part in fun and educational activities.

They get a chance for example, to play minigames where they must sort the medical instruments of my surgical operation or put together a puzzle of my exam results. Edutainment has been proven to be particularly effective with younger students and the ludic part of the Mobile Learning Center was specifically designed by educational experts with high-school age participants in minds to allow them to learn whilst also having fun. Here’s a video on some of the professions and educational activities that can be found on the GWU Mobile Learning Truck experience!

Reaching and representing diversity

The project of the Mobile Learning Center was developed to reach underprivileged highschoolers with innovative technologies and inspire them to pursue higher education in the healthcare sector. To make the simulation more relatable for the students and show a representative example of how a diversified healthcare force looks like, e-REAL has paid a lot of attention to details not only in my design, but in the creation of all the digital humans that populate this simulation. From choosing a large variety of American accents for the voiceovers of the avatars to realistically reproducing the physical characteristics of different ethnicities, the healthcare professionists of the Mobile Learning Center are an excellent example of how digital humans can be used for diversity and inclusion not only in the workplace, but also in education.

And the realism doesn’t stop there, because for educational purposes every aspect of the simulation including medical instruments, settings and procedures have been kept true to form thanks to the collaboration of the experts at GWU. Some of the professors also make a special appearance as digital humans inside the simulation along with the very beloved GWU mascot – can you spot him in the picture? – but that’s a story for another time!

I have to go to practice now, if you have any more questions, I’m sure some of the other digital humans of the Mobile Learning Center would love to give you their opinion too. For now, the DigiBlog will wait for you next week with a new character. See you!

Matt

Hi everybody! I am Matt, the Real Time Avatar. I’m a digital human capable of replicating human emotions in real time whilst also being highly customizable, so I’m often used for training in Diversity & Inclusion. If you want to find out how I am employed and what my special skills are, keep on reading!

Digital humans research: An intelligent avatar

Unlike many of my digital human colleagues, there is not much to say about my background story because I do not belong to a specific scenario but are rather more of a flexible and customizable solution. I can be a doctor, a patient, a job candidate, and so much more!

I can proudly say, I’m a product of research. Centro Studi Logos developed us Real Time Avatars based on extensive research on what was wanted from digital humans and what could give them the greatest variety of uses. In fact, even though we’ve been primarily deployed to train employees in Diversity & Inclusion, we could potentially be used in any field! And since I’m always evolving and upgrading, who knows what the future holds for me. I’m sure since artificial intelligence has developed amazingly in the last years, I’ll get an upgrade or two in due time, you just wait!

But for now, I’ll keep to my research contributions. No, I don’t actually do the research you silly! I’m intelligent, but not that intelligent… My contribution is more of a representative, showcasing role. My Real Time companions and I – whom you might meet later on if I manage to convince them to appear on the DigiBlog, oh to have stubborn friends! – accompany Logosnet and e-REAL founders and managing partners Fernando Salvetti and Barbara Bertagni to many simulation conferences around the world, like SimOps and The Learning Ideas Conference. I’m not gonna lie, I do enjoy the travelling – I mean, how many avatars can brag about international appearances? – but what really makes the deal for me is the chance to be sort of an avatar ambassador, showing the world what us digital humans can do and how we can help you humans in many ways. And yes, maybe I also enjoy the attention of the public a little…

Real time emotions in a digital human

Now as I said, I can replicate emotions just like you humans. What’s so special about that, you ask? Well, thanks to the live inputs of my operator during training sessions, I don’t simply play a predefined script, but I can react to the trainees in real time. So, it’s just as if you were talking to a human, a digital human. If you say a bad joke, I will laugh – or not, I might get offended just to make it an educational time. If you hurt my feelings, I will look hurt. And “look” is the key word here, because I do not really have feelings, and so you can’t hurt me. But we will come back to this.

Anyways, I boast a lot about myself, but really, human emotions are so complex I would not stand a chance without my operator. Me on my own? Oh, that would be embarrassing… The operator can manage me through a control panel and influence a variety of small details including my emotional reactions, but also some extra stuff that I will show you later (I have to say, it’s really hard not to spoiler anything!). If you’re reading this and thinking that being an operator is hard and only qualified people can do it, don’t worry. Anyone can learn if they want to, we have a manual for that (no really, it’s 10 slides, anything that fits in 10 slides can’t be that hard, right?). If you don’t believe me, here’s my operator talking about his experience working with me!

Diversity & Inclusion with customizable avatars

I have tried my best, but I can’t keep the secret anymore… I have superpowers! No, not laser eyes or stuff like that, no, it’s not time travel either, ok maybe it’s not really a superpower, but one could say I shapeshift and space travel… You don’t believe me? Well, do you remember all those nifty settings my operator can change? They control the background that is shown behind me in the simulation, but also my poses, my clothes and some physical features like my skin tone. I can go from a Hispanic office worker to an African-American doctor in a few seconds.

As I said, there is a reason we’re heavily employed in D&I, it just makes sense! But it’s not just about looks. D&I training can be a very sensitive process both for the employer and the employees and before people learn how to properly address the issue, some of the participants might get emotionally hurt during the training. As an avatar, I have no emotions that can be hurt, and you can practice with me for as long as you need to. In fact, I’m a relentless educator and a pretty good motivator. I will not give up on you, just make sure the cable has access to power.

Still, even though it’s not about looks, looks matter too. Can human shapeshift and space travel? I don’t think so. And we all know that imaginative power for scenarios takes a lot of effort that could be better invested in the actual D&I training, so instead of asking your employees to imagine sketches to practice, or hiring a dozen different actors, you can just simulate all the scenarios you need with an e-REAL interactive wall and, well… me!

Now don’t come at me with “Matt, I don’t want to offend you, but no matter how much you switch your appearance, you can’t be really inclusive”, because I know. This is not a one man show! I like to call myself THE Real Time Avatar – sounds fancier, you know… – but I’m really part of a team and together I assure you, the combinations are endless. For example, sit me in a weelchair with an hospital background and I can be an injured patient, but do it in an office setting and you are testing your recruiters for biases. Just look at these examples!

Oh no, look at the time, I have a digital flight to catch! I hope you found this insight into Real Time Avatars interesting and the DigiBlog will wait for you next week with a new digital human. I’m really late. Better use those space traveling powers… Bye!!

Montgomery College Raptor

Hi everybody! I am the avatar of the Montgomery College Mascot in Maryland, USA: the Raptor. There I am part of a medical simulation scenario that helps students learn and train for their future job. If you are wondering what a digital human like me does and how a non-human avatar is created, keep reading!

What does a digital mascot do?

Now, before you worry that birds are taking over the blog about digital humans, lets clarify: can I be a digital human if I’m not even human? Well, if I’m not a digital human, then maybe I’m a digital mascot? That’s a whole new term… Better not to get lost in the technicalities, I’m antropomorphic, it’s close enough! Still, when you think medical simulation, a mascot is not the first thing that comes to your mind. So what am I doing on the e-REAL walls at Montgomery College? The simple answer is: I do what mascots do! This doesn’t tell you anything? Ok, let me explain…

If you’ve never taken part in a medical simulation and you’ve never seen an e-REAL interactive wall, you probably don’t know what to do with it. And here is where I appear and greet you. I not only represent my school in explaining what the simulation is about and why it was created, but I also help the participants understand how to properly take advantage of all the functions the interactive walls offer. Because if you thought that you just touch some digital buttons and it’s all going to work out, you are in for a surprise. We’re talking shared movable notepads, digital documents you can access, 3D movable medical tools, simulated medical test results and so much more!

I help, I guide, I cheer you on and motivate you when you are struggling and I make sure that overall your experience whith this medical simulation is great and educational. And if you are particularly lucky, my amazing charisma might prevent your professor from noticing that very stupid mistake you just made. Shhht, put it back and no one will ever know, Raptors word!

The animation of digital humans

But what is really so special about me, is how I’m made. If you think of mascots, most of you probably picture a human wearing a costume, so there is the outer costume layer and then the human inside. Digital humans have a double layer too, but not quite the same. There is an exterior, pshysionomical layer that defines the appearence, and then inside you’d find a very elaborate skeleton that allows them to move the different body parts. 3D modelers and programmers call it a skeleton, but they’re not the kind of bones you are thinking of: they appear as colored lines that represent the singular parts that can be moved independently.

No matter how elaborate the skeleton of a digital human is, the skeleton inside a digital mascot like me is on a whole other level, because oh wonder, humans don’t have beaks, tallons and wings they have to move! This bird has no standard skeleton, it’s all custom made! To create this intricate details of the Raptors costume and allow them to move and be part of the animation instead of just being strictly an aesthetic feature was a fun challenge for the e-REAL team, and they did great! I haven’t tested my wings yet, but I could probably fly if I wanted, I’m not just that sure that my digital bones are gravity defying… what do you think? Nevermind, I’ll test it later.

If you want to better understand how a digital humans’ inner skeleton works, here is a short video about the movement of my body.

Digital humans in medical simulations

As I told you, I’m part of a medical simulation used by students. Since digital humans don’t even have the same skeleton as you, you could wonder why practice medicine on them. First of all, a digital medical simulation is a good way to practice what actions to do in specific situations, like what tasks to do in what order, what utensils to use and so on. Being digital, it allows multiple people to take part totgether, making it ideal for a class, and since you don’t have to use real equipment because you have digital replicas of it, it can be used on actual patients. So it’s actually a smarter use of resources! But that’s not all, because the most important part of a medical simulation with digital humans are the interactions. It’s not just your classmate – who lets be real, isn’t the greatest actor you’ve ever seen – lying there on the gurney and pretending to be a patient, but an avatar that can be programmed to play a huge variety of different characters. And the e-REAL avatars not only perform actions and dialogues, but are also able to show emotions just like human patients do. This gives the students a chance to try themselves in very specific situations with maybe particularly hostile or not very communicative patients, and so practicing for future real life situations. Since this situations tend to always be stressful, when they will happen, I think the students will be very happy to have had a chance to pratice before and this will give the patient better quality care. This is why I’m so proud to be a digital mascot: with my work I get to really help the students be the best they can!

I think I’ve talked enough for today, I hope you learned something interesting, so that I can say that I’m not only educating students at Montgomery College anymore, but you too! The DigiBlog will wait for you next week with a new digital human, don’t miss it!

Count Ettore Arrigoni degli Oddi

Hi everybody! I am the avatar of Count Ettore Arrigoni degli Oddi and you can meet me on an e-REAL interactive wall at the Civic Museum of Zoology in Rome. How I ended up as a digital human in a museum is a long story, but keep reading if you want to find out more!

Why did Count Arrigoni become a digital human?

Count Ettore Arrigoni degli Oddi was an historical figure that lived in Italy between 1867 and 1942. He – or should I say I? The identity crises an real-person avatar has to go through when they are you and you are them… I’m going to say “I”, because I am him, just digital! Anyway, I was one of the founders of italian ornithology and owned a huge stuffed birds collection that is hosted today in… guess where? The Civic Museum of Zoology in Rome!

Yet, usually museums talk about people like me, they don’t actually bring us back to life, so how did I come to be? The Civic Museum of Zoology has a great responsibility in educating the youth that visits them very day, but lets be honest, who as a teenager was ever interested in reading through the endless panels of a museum? And you won’t believe this, but someone suggested that a room full of stuffed dead animals and tiny hooks and scalpels to skin them could be considered a little bit creepy… No no, I’m not the one that said that, I love my birds! But the museum decided that in order to keep in touch with the very youth they were trying to educate, it was time for an update, and so with funds from an EU digitalization, education and culture project, and the help of e-REAL they redesigned several of their exposition rooms, including mine.

But why did I become an avatar? The simple truth is, you might not want to read through a panel about me, but you are going to love talking to me! While you stroll through the room with all my old cabinets and utensils, I can tell you about them and how they are used and about how even though they seem creepy and part of an outdated practice, these stuffed animals have a key role in ornithology. Do you know how a certain bird species looked 100 years ago? No? Well niether would today’s ornithologists if someone hadn’t made a stuffed specimen, and this kind of knowledge is essential in order to understand how bird species are impacted by the modern world – yes, yes, you can thank me later. Look, that’s me and my flesh version, uncanny resemblance, don’t you think?

The power of an interactive wall in a museum

Now you might ask, if it’s just about hearing me blab about my dear old birds, wouldn’t a video with a good actor have been enough? Well, would you stop and whatch a dressed up actor or do you want the original?! But jokes aside, my interactive wall is so much more than just me talking, it’s a full experience that each visitor can custom to their likings, deciding in real-time what they want me to talk about and creating their own path through the topics I offer.

And a very important part of digitalization is that it’s not just about the visitors! The interactive system allows the museum to keep track of what are the public’s favourite topics, finding out what they want to know about more and having a chance of incorporating that knowledge into the future experiences. Well, if that isn’t a modern museum at it’s peak, I don’t know what is!

And it’s also not just about the avatar that lives in it, the rest of the scenario is great too! You can enjoy a detailed and realistic replica of my office, completly immersing in the feeling of what might have been being an ornithologist at the time. How detailed you ask? Well, I’m not supposed to tell you this, but in the last of their tests the e-REAL team discovered a small light switch hidden in a dark corner. No one would have seen it, but we can’t have that in a 19th century scenery, can we? It totally ruins the vibe – pardon me, but I learned some expressions while I was in the 21th century. When I realized that I would be spending 24/7 in my scenery I insisted on wanting my original working space and not a half-assed, vaguely inspired 19th century office. And so they dived in the archives for original pictures – and trust me, they had to dive deep because they are in the basement of the museum – found a few black and white, dusty photos and, well, they delivered! It feels just like home! You can take a look at how my studio and I were created in this short video.

3D modeling historical details

But replicating a space from pictures, that’s everyday stuff for the e-REAL team, what actually makes me so special, is that I’m a historic figure and it wasn’t just about creating an avatar, but about bringing back to life an exact copy of someone that had existed. In English you might not notice it, but when I talk to you in Italian I have an accent of the Paduan province where I was born, and trust me, to find a voice actor with that accent must not have been easy because italian regional accents vary so much even just inside the same region. But if the avatar didn’t talk like me, it wouldn’t have been me!

It doesn’t stop there… Have you ever wondered how an avatar is dressed? In videogames you usually have those nice virtual closets where you can scroll until you find the perfect fit, in 3D modeling… well yes, and no. There are existing 3D models that can be used as a base for avatar clothes, but you know what the most popular eras are, where a lot of models exist? The Middle Ages, the Stone Age – yes, that was a surprise – and 19th century italian ornithologist? Not so much. So my clothes and accessories had to be created almost from scratch based on a few historical pictures, that’s what I call digital tailoring! Could you imagine yourself being a tailor to a digital human? Now that’s an unconventional job to tell your friends about! If you want to find out more about how 3D modeling for a historic digital humans works, here’s an interview with the project manager and computer development expert that worked on the project.

Now, as much as I love talking about me and my stuffed birds, I think it’s time to wrap it up. The DigiBlog will wait for you next week with a new digital human, don’t miss it! As for me, it’s been a pleasure to talk to you and thank you for sticking to the end, see you in Rome if you ever come to visit me!