Tell us a little bit about yourself, your background, and your current role at LinkedIn.
I've been an AI product manager at LinkedIn for nearly five years now, and I've had the privilege of working across various business lines, including software as a service, media, and consumer segments. That experience has given me a deep understanding of how technology companies operate. At the moment, my focus is on developing artificial intelligence applications to enhance LinkedIn's go-to-market strategies, specifically targeting sales, customer support, and other customer touchpoints for technology and enterprise companies.
One of the biggest concerns around advancements in AI is the potential for misuse. How do you see innovation combating fraud?
It's one of the most terrifying changes that I think AI will bring about. Scammers are already exploiting AI; one example is mimicking voices to deceive vulnerable groups. Companies like Veriff and Clear are tackling this, aiming to keep the internet safe. Trust is crucial, whether that's in online transactions or identity verification. We pride ourselves on being voted the most trusted social network on the internet. As AI evolves, so do fraudsters' tactics, but innovations like online identity verification offer hope in the fight against fraud.
Are you seeing more user caution on social platforms in light of rising online fraud?
Definitely. Users, rightly, demand value and trust. Even in financial services, companies like Wealthfront use referrals to balance trust and convenience. The trust required varies, from robust verification like Veriff to fully anonymous platforms, and the level needed depends on the platform's purpose. For instance, at LinkedIn we need high levels of trust due to the sensitivity of job applications. It's up to leaders to navigate this trust spectrum.
What predictions do you have for LinkedIn and the products being developed there, as well as for the wider AI space?
I'll start with the predictions I'm most certain of. First, I think identity is going to become digital-first, making physical IDs redundant. Second, education will transform as learning becomes more personalized. Third, the AI arms race will stabilize as cloud providers offer standardized models. Finally, I'd like to think digital doubles will enhance productivity, enabling personalized AI assistants for everyone.
How do you ensure a human-centric approach to product development?
In the past, product development involved controlling user experiences through design. With AI, we lose that control, as every user interacts with platforms differently. This makes human-centered design more challenging. To tackle this, clarity on product principles is crucial. There's a fantastic executive at LinkedIn named David Vombray, who works on our business development and partnerships team, and he has a great way of thinking about principles: consider all of the worst-case scenarios and what you would need to do to prevent them from happening. It's very similar to Charlie Munger's principle of thinking by inversion.
At LinkedIn, our number one principle is responsible AI: ensuring our AI is vetted to prevent harmful outcomes. To summarize, I would say that to ensure human-centered design, you must define clear product principles and then implement safeguards for AI alignment.
Practically, how do you implement these principles, especially across diverse teams like UX design, developers, and business stakeholders? Is a feedback loop the key, and if so, how do you ensure it's effective?
I believe that no user actually cares about 90% of the technology products they use. They care about the objectives a product helps them achieve rather than the product itself. At LinkedIn, we're fortunate to have a team dedicated to representing user interests. For example, consider a small business using LinkedIn ads. They just want leads and revenue, not the complexities of marketing. So we launched Accelerate, simplifying ad creation through AI. Each team involved in this process has its own feedback mechanisms. Clear goals, tracking, and measurement ensure insights are gathered and improvements are made iteratively. They say it takes a village, and it really does, but aligning on what insights to look out for and how to measure them allows us to improve continuously.
How do you balance speed, agility, and ethics in product development?
It starts with understanding the incentives driving decisions. Individuals prioritize their own incentives, which then ripple outwards. A startup may prioritize rapid feature releases to attract investors, while a large corporation might prioritize stability because of the repercussions of making changes. Clear principles guide decision-making; think of it as a pre-flight checklist: Does this meet our responsible AI standards? Does this earn the trust of our members? Is this a simple design? All of these attributes ensure that the product you're shipping is of high quality. Every product manager has their own secret sauce: the prioritized set of principles that matter most to them. My top two are safety and responsible AI. Ultimately, the goal is to ensure products are tested and measured against user expectations before release.
Do you think frameworks and ethical considerations are necessary in AI?
Definitely. AI isn't always the answer; sometimes, deterministic technologies are better. Learning from academia and industry is vital as tech evolves rapidly. For example, our chatbot project had to adapt when new AI tech emerged. Google's challenges highlight the importance of navigating AI ethically. We're staying informed, collaborating, and focusing on ethical AI practices.
Please note that the views expressed in this episode belong to Zane and not to his organization, LinkedIn. The interview has been edited for length and clarity.