3 Mistakes to Avoid When Incorporating AI into Your Company
The following article is from INC.com, one of the most prestigious publications in the United States covering the issues of SMEs in the fields of leadership, marketing, innovation, HR, and more. The author of this article is Chenault Taylor, a director at BCG and an ambassador for the BCG Henderson Institute.
This is why you should treat generative AI as a new member of the team, not just another technological solution.
Leaders are increasingly talking about generative AI as a teammate, rather than a tool.
However, most organizations still approach its adoption as if it were a technological deployment.
They implement GenAI, offer basic training, and expect a revolution in productivity that probably won’t materialize.
Despite the excitement surrounding GenAI, the gap between the technology’s promise and its real value remains large.
This isn’t about faulty tools or reluctant employees, but rather how leaders are introducing GenAI and preparing their organizations and staff to work with it. Treating GenAI as a team member isn’t just a clever metaphor. It’s a leadership imperative.
GenAI needs to be integrated, trained, and given room to grow.
At the same time, leaders must create space for their team to learn how to collaborate with it productively and enjoyably, and ensure that incorporating GenAI improves the entire team, not just accelerates its pace.
Here are three specific mistakes to avoid and what to do instead.
Mistake 1: Prioritizing Productivity Above All Else
When companies think about GenAI, the first, and often the only, thing they mention is productivity. Automate this. Accelerate that. Do more with less.
But if you stop there, you’re underutilizing GenAI’s potential and risk creating fear, disengagement, and low adoption. Our research shows that the more people use GenAI, the more they trust it—but also the more they fear what it might mean for their jobs.
Instead of asking "How much can we leverage GenAI?" ask yourself the same questions you would ask about a new hire: What are their strengths? Where will they need support? How can we integrate them into the team so everyone can perform at their best?
Seen this way, GenAI becomes less about replacement and more about rebalancing. It's the teammate who takes on repetitive or frustrating tasks so people can focus on what energizes them: creative problem-solving, mentoring, collaboration. GenAI doesn't just boost performance; it can make the whole team's work better, more meaningful, and more enjoyable.
Mistake 2: Overlooking the Employee Experience
Too often, companies introduce technology without involving employees.
The best organizations do the opposite. They co-create the implementation with employees, asking them: Which aspects of your job drain you? Where could GenAI help alleviate drudgery? What do you wish you had more time for?
Co-creation can include ongoing touchpoints with employees—through advisory groups, surveys, and one-on-one conversations—as well as iterating on solutions with them based on the tools’ use and impact.
Leaders learn from this process and use the insights to implement GenAI in ways that reduce effort and increase satisfaction.
This is important: our research shows that employees who enjoy their jobs are half as likely to look for another job.
When GenAI is introduced through a co-created, employee-centric approach, we’ve seen four times higher usage rates and a 13% increase in overall employee satisfaction at work.
A major factor in these improved usage and satisfaction results is the influence and support of managers. They should set the tone, use the tools themselves, and openly model what good integration looks like with this new teammate.
Mistake 3: Treating GenAI as a Replacement, Not an Advancement
When you "hire" GenAI, it's not simply a drop-in replacement for existing roles.
It’s an entirely new capability. Imagine adding someone to your team with the ability to instantly recall a billion pieces of information, deep analytical skills, and zero ego.
You wouldn’t ask them to simply take care of your to-do list or anyone else’s. You’d rethink what your team could do with those superpowers.
That’s the opportunity with GenAI. But it involves reimagining workflows, redistributing responsibilities, and sometimes even restructuring teams; employees may end up working with new members or learning new skills. You must design the entire team’s work around GenAI’s strengths and, equally important, consider its limitations.
Many organizations fall short of transformations of this magnitude because they require a large investment, but truly changing the way teams work together can also have a huge impact.
Ultimately, you wouldn’t hire a high-potential individual and expect them to immediately start performing without guidance, context, or support.
GenAI is that skilled new hire, so implement it correctly. In return, it will help you drive innovation, improve the employee experience, and boost performance and productivity.
Generative AI Won’t Build Your Engineering Team for You
The following post is from the Stack Overflow blog, which describes itself as: "We empower the world to develop technology through collective knowledge. Our products and tools allow people to ask, share, and learn at work or at home."
The author is Charity Majors, co-founder and CTO of honeycomb.io, a leader in observability for complex software systems. She has worked as an operations engineer and engineering manager at Parse, Facebook, Linden Lab, and other companies.
Generating code is easy, but generating good code isn’t so easy.
When I was 19, I dropped out of college and moved to San Francisco. I had a job offer to be a Unix systems administrator at Taos Consulting.
However, before my first day on the job, I was persuaded to join a startup in the city, where I worked as a software engineer on email subsystems.
I never questioned whether I’d find a job. There were plenty of jobs, and more importantly, hiring standards were very low.
If you knew how to use HTML or navigate a command line, you were likely to find someone who would pay you.
Was I some kind of genius, born with my hands on a computer keyboard? Of course not! I was homeschooled in a remote part of Idaho. I didn’t touch a computer until I was sixteen, when I was in college.
I ran away to college on a classical piano scholarship, which I later traded for a series of non-technical majors: classical Latin and Greek, music theory, philosophy. Everything I knew about computers I learned on the job, as a systems administrator for the university and computer science departments.
In retrospect, I was incredibly lucky to get into the industry when I did. It makes me shudder to think what would have happened if I had arrived a few years later. All the stepping stones my friends and I took to get into the industry are long gone.
The software industry is growing up.
To some extent, this is what happens as an industry matures.
The early days of any field are like the Wild West, where the stakes are low, regulation is nonexistent, and standards are nascent.
If we look at the early history of other industries (medicine, film, radio), the similarities are striking.
There’s a magical moment in any young technology where the boundaries between roles are porous, and anyone motivated, curious, and willing to work hard can seize the opportunity.
It never lasts. It can't; it shouldn't. The amount of knowledge and experience required to enter the industry rises steeply.
The risks increase, the magnitude of the mission increases, and the cost of mistakes skyrockets.
We develop certifications, training, standards, and legal requirements. We debate whether software engineers are really engineers.
Software is a learning industry.
These days, you wouldn't want a teenage dropout like me anywhere near your pager rotation.
The prerequisite knowledge required to enter the industry has increased, the pace is faster, and the stakes are higher, so you can no longer literally learn everything on the job, as I once did.
However, it’s also not possible to learn everything you need in college.
A degree in computer science often prepares you better for a life in computer science research than for life as a software engineer.
A more practical route into the industry can be a good coding bootcamp, with an emphasis on problem-solving and learning modern tools.
In either case, you're not so much learning "how to do the job" as learning "enough of the fundamentals to understand and use the tools needed to learn the job."
Software is a learning industry. You can’t learn to be a software engineer by reading books.
You only learn by doing… and doing, and doing, and doing even more. Regardless of the training provided, most learning happens on the job, period. And it never ends!
Learning and teaching are lifelong practices; they have to be; the industry changes so quickly.
It takes more than seven years to develop a competent software engineer (or, as most career ladders would call it, a "senior software engineer").
That’s many years of writing, reviewing, and deploying code daily, on a team alongside more experienced engineers. That’s precisely how long it seems to take.
What does it mean to be a «senior engineer»?
This is where I often receive indignant criticism about my timelines, for example:
"Seven years! Pfft, it took me two years!"
"I was promoted to Senior Software Engineer in less than five years!"
Good for you. It’s true that there’s nothing magical about seven years. But it takes time and experience to mature and become a seasoned engineer, the kind of engineer who can be the backbone of a team. More than that, it takes practice.
I think we've come to use the term "Senior Software Engineer" as shorthand for engineers who can deliver code and have a net positive impact in terms of productivity, and I think that's a serious mistake.
It implies that less experienced engineers must have a net negative impact in terms of productivity, which is false. And it elides the true nature of software engineering work, of which writing code is only a small part.
For me, being a senior engineer isn’t primarily about the ability to write code.
It’s much more about the ability to understand, maintain, explain, and manage a large amount of software in production over time, as well as the ability to translate business needs into technical implementation.
Much of the work involves creating and managing these large, complex sociotechnical systems, and code is just a representation of these systems. What does it mean to be a senior engineer? It means having learned how to learn, first and foremost, and how to teach; how to hold these models in your head and reason about them, and how to maintain, expand, and operate these systems over time. It means having sound judgment and instincts you can trust.
Which brings us to the topic of AI.
We need to stop cannibalizing our own future.
It’s very, very hard to get your first engineering position.
I didn’t realize how difficult it was until I watched my younger sister (recent graduate, excellent grades, some practical experience, a tireless worker) struggle for almost two years to get a real job in her field.
That was a few years ago; anecdotally, it seems to have gotten even harder since then.
Last year, I read a steady stream of articles about entry-level jobs in various industries being replaced by AI.
Some of those stories are entirely valid. Any job that consists of rote busywork, like converting a document from one format to another, reading and summarizing a bunch of text, or swapping one set of icons for another, seems pretty vulnerable.
This doesn't strike me as all that revolutionary; it simply extends the current automation boom from numbers to textual material as well.
Can AI replace the work of junior engineers?
However, recently, several executives and so-called "thought leaders" in the tech industry seem to have convinced themselves that generative AI is about to replace all junior engineers' work.
I’ve read countless articles about how junior engineers’ jobs are being automated out of existence, or about the diminishing need for junior engineers. It’s driven me crazy.
All of this reveals a profound misunderstanding about the true work of engineers. By not hiring and training junior engineers, we are ruining our own future. We need to stop doing this.
Writing code is the easy part
People act as if writing code is the hard part of software. It isn’t.
It never has been, and it never will be. Writing code is the easiest part of software engineering, and it’s getting easier every day.
The hard part is what you do with that code: operating it, understanding it, extending it, and managing it throughout its entire lifecycle.
A junior engineer starts by learning to write and debug lines, functions, and snippets of code. As you practice and progress to a senior engineer, you learn how to build systems from software and guide them through waves of change and transformation.
Sociotechnical systems are made up of software, tools, and people.
Understanding them requires familiarity with the interaction between software, users, production, infrastructure, and continuous change over time.
These systems are incredibly complex and subject to chaos, lack of determinism, and emergent behaviors.
If someone claims to understand the system they are developing and operating, either the system is exceptionally small or (more likely) they don't know enough to know what they don't know.
In other words, coding is easy, but systems are complex.
The current wave of generative AI tools has helped us tremendously in generating large amounts of code at high speed.
The easy parts are getting even easier, at a truly remarkable rate. But it hasn’t contributed at all to the management, understanding, or operation of that code. If anything, it’s only made difficult tasks more difficult.
Generating code is easy, but generating good code is difficult.
If you read a lot of passionate opinion pieces, you might imagine software engineers happily writing prompts for ChatGPT or using Copilot to generate reams of code, pushing whatever comes out to GitHub, and calling it done. That doesn't correspond to our reality.
The correct way to think of tools like Copilot is more like a sophisticated auto-complete or copy-paste feature, or perhaps the perfect combination of Stack Overflow search results and Google's "I'm feeling lucky." You still have to check everything it gives you.
These tools work best when there’s already a parallel in the file and you just want to copy and paste with slight modifications.
Or when you’re writing tests and have a giant block of fairly repetitive YAML, and it repeats the pattern while inserting the correct column and field names, like an automatic template.
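This pattern-repetition sweet spot can be sketched with a small, hypothetical example: the table, column, and function names below are invented for illustration, but they show the kind of repetitive test code where an autocomplete-style tool excels, because each new case is a near-copy of the one above it.

```python
# Hypothetical example of repetitive, pattern-heavy test code.
# An autocomplete-style tool can extend CASES with new near-identical
# entries, filling in the right table and column names each time.

CASES = [
    # (column, expected fully-qualified name)
    ("user_id", "users.user_id"),
    ("order_id", "orders.order_id"),
    ("invoice_id", "invoices.invoice_id"),
]

def qualify(column: str) -> str:
    """Prefix a column name with its (naively pluralized) table name."""
    table = column.removesuffix("_id") + "s"
    return f"{table}.{column}"

def test_qualify():
    for column, expected in CASES:
        assert qualify(column) == expected
```

Each added case is one line that follows an obvious template, which is exactly the shape of work these tools complete reliably.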
However, you can’t trust the generated code.
I can't emphasize this enough. AI-generated code always looks plausible, but even when it works, it rarely matches your wants and needs. It will generate code that doesn't parse or compile. It will invent variables, method names, and function calls; it will hallucinate nonexistent fields.
The generated code won’t follow your coding practices or conventions. It won’t refactor or create intelligent abstractions.
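A minimal, invented illustration of what "plausible but wrong" looks like in practice: the first version below is the kind of code a tool might emit, calling a method that reads naturally but does not exist on Python's `datetime`; the second is what survives a line-by-line review.

```python
# Hypothetical example of plausible-looking generated code.
from datetime import datetime, timezone

# A tool might emit something like this. It reads fine, but datetime
# has no `to_iso()` method, so it raises AttributeError at runtime:
#
# def build_timestamp(dt: datetime) -> str:
#     return dt.to_iso()

def build_timestamp(dt: datetime) -> str:
    # Reviewed version: use the API that actually exists.
    return dt.replace(tzinfo=timezone.utc).isoformat()
```

The broken version would sail through a casual skim, which is precisely why the output has to be reviewed before it is committed.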
The more important, difficult, or meaningful a piece of code is, the less likely you are to generate a usable artifact with AI.
You may save time by not having to write the code from scratch, but you’ll have to go through the output line by line, revising as you go, before you can commit your code, let alone ship it to production.
In many cases, this will take as much or more time than simply writing the code, especially today, now that autocompletion has become so smart and sophisticated. Making AI-generated code compatible and consistent with the rest of the codebase can be a huge effort. Frankly, it’s not always worth it.
Generating code that can compile, run, and pass a suite of tests isn't particularly difficult; the difficult thing is creating a codebase that many people, teams, and successive generations of engineers can understand, maintain, and extend over time.