Meet Communications@Syracuse’s Content Fellow Steve Masiclat


Content Fellow: Steve Masiclat

Communications@Syracuse offers content fellowships to lead professors and senior-level communications professionals who teach in the online program. These fellowships provide faculty with the opportunity to share their expertise in a variety of communications-related topics including PR, advertising, journalism, branding, marketing and digital media. 

We’re excited to introduce our first content fellowship recipient, Steve Masiclat. He directs two graduate programs—New Media Management and Computational Journalism— and teaches in the online Master of Science in Communications program at Newhouse. Masiclat’s experience includes interface design, programming and art direction. He also served as a captain in the Marines where he commanded infantry platoons. During the last 20 years, his research and teaching interests have evolved along with the digital communications field. Today he is a published researcher in the field of artificial intelligence and advises Wall Street investment firms on advanced web technologies. 


We sat down with Masiclat to learn more about his background and interests, and to get an idea of what’s in store for his content fellowship.

You’ve had a unique career—serving in the Marines, programming for Macintosh and working as a graphic designer. How did this come about?

As an undergraduate, I started as an art student, but I came from a family of engineers, so I took a lot of mathematics and computer science courses. Military service was something my family was steeped in—my Filipino grandfather was a captain in the 26th Cavalry, Philippine Scouts, and my British grandfather was in the Royal Navy—so while I was in college I enlisted in the Marine Corps and was commissioned when I graduated.

While I was serving, computers were just being introduced into the front-line units, and they were completely different from the computers I had used to learn programming. The old IBM punch cards were gone, and there were easy command line prompts to launch specialized applications. Soon after that we started seeing the new graphical user interfaces (GUIs) being pioneered by people like Alan Kay and Bill Atkinson, and that allowed the application of computation to areas like art and design. By the time I resigned my commission, I knew I had to study the new style of computing. So in 1991 I started an interdisciplinary grad program at Cornell University that allowed me to study graphical communication on computers.

Cornell had a research lab devoted to developing advanced GUIs for new applications—the Interaction Design Lab run by Dr. Geri Gay. In the middle of my first year Dr. Gay hired me as a grad research fellow because I could program and I had art training. She started me on the path to doing user interface/user experience (UI/UX) research and development in a university setting.

What got you interested in virtual and augmented reality?

In 1992, as a graduate researcher, I attended the SIGGRAPH conference in Chicago where Jaron Lanier demonstrated the first commercially available VR system. His company, VPL Research, was interested in wider application of the VR rig, and they contracted with the lab I was working in to explore instructional uses of VR interfaces using the VPL system, especially the Nintendo Power Glove interface. In the end, VR didn’t have the bandwidth and speed to facilitate learning the complex concepts we were studying, so we ended that project after about eight months. The new VR rigs are, of course, more powerful and user friendly, but I remain very skeptical of their value beyond game environments.

On the other hand, augmented reality seems to have clear advantages and much wider applicability. I think of AR as the addition of a “value layer” on top of experiences of the real world. I also believe journalists have been in the business of creating these real-world value layers for a long time. A data value layer is like a lens that helps you see new details in the world, so for example, a set of reviews from a food critic lets you look at a street full of restaurants through the lens of “quality food and service.” Imagine being able to look at a restaurant—or any business—as you walk down a street and see, floating in your field of view, a graphic representing professional reviews or a crowd-sourced rating of that restaurant. 

What kinds of experiences should leverage VR instead of AR?

I think VR is built for gaming. Immersion isn’t just sensory experience, it’s also a cognitive state that is best facilitated by game-type dynamics. One of the research projects we did at the Interactive Multimedia Lab at Cornell explored the use of video to create immersive environments to enhance second language learning. We found that visual and audio stimulus wasn’t enough to create what Mihaly Csikszentmihalyi calls the “flow state” of total immersion. We discovered that you also had to have a particular frame of mind—“a role and a goal”—to give coherence to your behavior and to help make sense of your experiences in a virtual world. That’s what VR needs to provide, a cognitive context and a reason to be in a virtual environment. Games intrinsically create these roles and goals, and that is what motivates people to enjoy virtual worlds and spend meaningful time in them. 

What got you interested in the Internet of Things (IoT)?

In 2006 I was an academic fellow with IBM’s Global Innovation Outlook, and in that year one of the topics we explored was the emerging use of very large data sets to discern new patterns and generate new insights. The work we did in 2006 contributed to IBM’s “Smart Cities” initiative. I saw the power of connecting devices like street lights and gas chromatographs to the internet—allowing cities to manage both traffic flow and pollution levels through networked sensors and “smart grids.” The next step was to imagine how else we could use devices connected in a massive data network to make the world a better, smarter place.

What do you regard as the most interesting use of the Internet of Things?

Let’s have a moment of brutal honesty. If you ask me what my favorite topic in the whole world is, I’d have to say it’s me.

In fact, I think for most people their self is, if not their favorite topic, then certainly among the top three. In that context, I think the most interesting use of the IoT is in measuring everyday life—generating the big data that allows us to discern our own hidden patterns and behaviors, and develop a sense of our emerging selves. The sensors that comprise the IoT are many and varied; they include atmospheric sensors, electrochemical gas sensors and electromagnetic signal sensors.

The data we can gather is an untapped resource that has the potential to help us live better. For example, I have a nephew who was born with a condition that afflicts hundreds of thousands of people: an extreme sensitivity to sunlight and a propensity for malignant melanoma. He has to avoid direct sunlight, but he’s also vulnerable to invisible light like UV, which can reflect off many surfaces. Interestingly, he’s even vulnerable to UV radiation at night, because high-radiation sources like mercury vapor lights generate a lot of UV. Clothing with integrated UV sensors that could track his cumulative UV exposure and warn him when it was excessive would literally be lifesaving for him. That’s just one example, but it shows how much data there is to be gathered to make our lives safer, more efficient and more productive.
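A minimal sketch of how the wearable idea above might work, assuming a stream of periodic UV sensor readings. The dose units, sampling scheme and safety threshold are all hypothetical placeholders for illustration, not medical guidance:

```python
# Hypothetical sketch of a cumulative-UV-exposure monitor like the one
# described above. Units and threshold are invented for illustration.

DAILY_UV_LIMIT = 100.0  # hypothetical daily dose limit (arbitrary units)

def check_exposure(readings, limit=DAILY_UV_LIMIT):
    """Accumulate periodic UV sensor readings; warn once the limit is crossed."""
    total = 0.0
    for i, dose in enumerate(readings):
        total += dose
        if total > limit:
            return f"warn at reading {i}: cumulative dose {total:.1f} exceeds {limit}"
    return f"ok: cumulative dose {total:.1f}"

# Simulated day: low daytime readings, then a spike near a mercury vapor lamp.
print(check_exposure([5.0] * 10 + [30.0, 40.0]))
```

The point of the sketch is that the warning depends on accumulated dose over time, not any single reading—which is exactly what an on-body sensor can track and a person cannot.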

What got you interested in artificial intelligence?

Two years ago I was approached to join an interdisciplinary team of researchers developing an online software repository. The goal was to design a global system that connected people who needed software to technical experts who either had developed, or could develop, the code. Early in the process we discovered that basic search algorithms were not very useful, because the two sides of the problem had extremely different ways of communicating. For example, a person running a large food pantry might say “we need software to help track donated food and move it out of our pantry before it spoils,” but a coder might archive software for “an inventory velocity manager for perishable warehousing operations.” To an expert, those two phrases describe a perfectly matched problem and solution, but to a search engine they have no words in common and therefore are not a match. Our team used an AI algorithm in concert with a new user interface design to solve that matching problem. That got me wondering about using AI to solve other communication problems, especially in journalism.
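The mismatch described above is easy to demonstrate in code: a naive keyword matcher finds zero word overlap between the two phrases, so it reports no match at all. The phrases come from the interview; the matcher itself is an illustration of the problem, not the team’s actual algorithm:

```python
# Illustration of the vocabulary-mismatch problem: two descriptions of
# the same need share no surface words, so keyword search sees no match.

def shared_words(a: str, b: str) -> set:
    """Return the lowercase words the two phrases have in common."""
    def tokens(s):
        return {w.strip(".,").lower() for w in s.split()}
    return tokens(a) & tokens(b)

pantry_request = ("We need software to help track donated food and "
                  "move it out of our pantry before it spoils.")
coder_listing = ("An inventory velocity manager for perishable "
                 "warehousing operations.")

print(shared_words(pantry_request, coder_listing))  # set(): no words in common
```

A matcher that compares meaning rather than surface vocabulary—for example, one built on semantic embeddings—can bridge this gap, which is the kind of problem the team’s AI-plus-interface approach addressed.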

How has “new media” evolved since you started in the industry?

Wow—how hasn’t it evolved?! I will always call it new media because if you work in it, it’s always new. In 1995, when I joined the faculty at Syracuse University, making a website meant writing the code for every web page, animating objects frame by frame, and often compromising images by compressing them down to a web-friendly size. Today, interfaces are as likely to be an AI chatbot or a device like an Amazon Echo as they are graphical websites. That said, I think the biggest change has to be the degree of connectedness everyone experiences. We are connected to an inhumanly large social network that operates at inhuman speed. There are many examples of how people use—and misuse—the power of social networks. We can literally share our thoughts and our eyewitness accounts with the world, and that’s both good and bad. Our mistakes and stupid comments can get us vilified by millions and fired in seconds. That’s a change I think people are still coming to terms with.

Follow Masiclat on Twitter @masiclat and read his fellowship pieces and other communications-related stories on Twitter @SyracuseComm.
