Deconstructing the AI Hype: A Developer's Journey Through Large Language Models and Code Generation

February 5, 2025


This is a different kind of post, and I will try to keep it short while sharing what I learned from my recent experiment with large language models (LLMs) and their practical applications in software development.

With promotional campaigns shouting "AI! AI!" everywhere - from smartphone brands touting neural processing units (NPUs) to applications claiming revolutionary machine learning capabilities - the 2025 AI hype feels surreal. So I thought: why not test these transformer-based models myself?

Understanding AI in Development Context

Modern AI coding assistants primarily use Large Language Models (LLMs) - sophisticated neural networks trained on massive datasets. These models, built on transformer architecture, excel at pattern recognition and generate contextually relevant code through autoregressive prediction.

Key concepts include tokenization, attention mechanisms, context windows (the information the model can "remember"), and fine-tuning on coding-specific datasets.
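The context window is the easiest of these to picture: a sliding token budget over the conversation, where the oldest messages fall out of "memory" first. A minimal sketch - token counts are approximated as word counts here, whereas a real tokenizer (e.g. BPE) splits text differently, so treat the numbers as illustrative:

```javascript
// Sketch: trim a chat history so it fits inside a fixed context window.
// Token counting is faked with a word count - illustrative only.
function trimToContextWindow(messages, maxTokens) {
  const countTokens = (msg) => msg.split(/\s+/).length;
  const kept = [];
  let used = 0;
  // Walk backwards: keep the most recent messages, drop older ones
  // once the budget is exhausted - this is why early instructions
  // get "forgotten" in long conversations.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = countTokens(messages[i]);
    if (used + cost > maxTokens) break;
    kept.unshift(messages[i]);
    used += cost;
  }
  return kept;
}
```

With a budget of 7 "tokens", an early 8-word instruction is silently dropped while the two recent short ones survive - exactly the failure mode I ran into later in this experiment.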

[Image: Developer working with AI tools]

The Experimental Setup

I never had a portfolio website, and I knew the basics of React and GSAP. It was the perfect opportunity to leverage generative AI for full-stack development from scratch.

[Image: Hero section of the generated site]

The project is live: aigeneratedfolio.projects.askvishal.in
GitHub repo: github.com/rajput-vishal01/ai-generated-portfolio

I selected Cursor with Claude 3.5 Sonnet for its superior reasoning capabilities and larger context window.

The Development Process

My prompt engineering approach: I used GPT to create detailed prompts explaining exactly what I needed - like a 3D spherical model in the hero section, how the skills section should look, what animations and libraries to use, etc. This meta-prompting approach (using AI to generate better prompts for another AI) exemplifies current AI-assisted workflows.
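The meta-prompting step can be sketched as a small template builder - the function name and template fields below are my own illustration, not any tool's actual API:

```javascript
// Hedged sketch of meta-prompting: ask one model to write a detailed
// prompt for another. The template structure is invented for
// illustration, not taken from GPT or Cursor.
function buildMetaPrompt(feature, details) {
  return [
    "Write a detailed prompt for an AI coding assistant.",
    `Feature: ${feature}`,
    `The prompt must specify: ${details.join("; ")}.`,
    "Name concrete libraries and describe the expected file structure.",
  ].join("\n");
}

// Example matching the hero-section request described above:
const metaPrompt = buildMetaPrompt(
  "3D spherical model in the hero section",
  ["animation library (GSAP)", "responsive behavior", "loading fallback"]
);
```

The point of the indirection is that the second model receives a far more specific brief than I would have written by hand.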

The Lucky Glitch

Here's something I don't know whether to call luck or just a funny incident: after exhausting my 100 trial prompts, I logged out and logged in with another account. Suddenly, Cursor upgraded me to Pro (their premium plan), giving me access to 450+ more prompts! This glitch let me experience the tool in much greater depth than originally planned.

Initial Success and Critical Failure

For the first 30 minutes, I witnessed impressive automated code generation: component scaffolding, state management, and UI construction. However, after extensive iterative prompting, a critical failure occurred: context degradation.

The AI experienced "semantic drift" - losing coherence about the project's intent and generating non-deterministic outputs. The root cause? Context window overflow: too many conversational turns had pushed the original requirements out of the model's working memory.

Recovery and Code Quality Issues

Through explicit context reinjection (re-explaining the requirements 10+ times), I restored the AI's understanding. Total time lost: nearly 4 hours.
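Context reinjection boils down to prepending a stable requirements summary to every prompt so the assistant cannot drift away from it. A minimal sketch - the wording and function name are illustrative, not a feature of any particular tool:

```javascript
// Sketch of explicit context reinjection: every prompt carries the
// same requirements block, so it can never fall out of the context
// window. The phrasing here is my own, for illustration.
function withReinjectedContext(requirementsSummary, userPrompt) {
  return (
    "Project requirements (do not deviate):\n" +
    requirementsSummary +
    "\n\nTask:\n" +
    userPrompt
  );
}
```

Doing this by hand ten times is tedious; wrapping it in a helper like this is the obvious automation.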

The generated codebase exhibited concerning patterns:

  1. Brittle abstractions: Minor modifications caused cascading failures
  2. Phantom dependencies: Unexplained coupling between modules
  3. Non-idempotent behavior: Identical operations producing different results

These issues stem from the AI's probabilistic nature - generating code based on learned patterns rather than deterministic logic.
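A toy illustration of that probabilistic nature: generation samples the next token from a learned distribution rather than computing one deterministic answer, so identical prompts can diverge. The tokens and probabilities below are made up:

```javascript
// Toy model of why identical prompts can yield different code:
// the next token is *sampled* from a probability distribution.
// Tokens and probabilities are invented for illustration.
function sampleNextToken(distribution, rand = Math.random) {
  let r = rand();
  for (const [token, p] of distribution) {
    r -= p;
    if (r <= 0) return token;
  }
  // Floating-point slack: fall back to the last token.
  return distribution[distribution.length - 1][0];
}

const dist = [["const", 0.6], ["let", 0.3], ["var", 0.1]];
```

Two runs with different random draws pick different tokens from the same distribution - scaled up across thousands of tokens, that is the non-idempotent behavior above.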

Can AI Replace Human Developers?

Absolutely not - at least not with current transformer limitations.

However, AI demonstrates significant potential as a force multiplier. A traditional 10-developer team could potentially operate with 3-4 developers leveraging AI assistance.

[Image: AI coding illustration]

The key insight: AI excels at pattern-based code generation but lacks architectural reasoning and system-level understanding.

My Current AI-Assisted Development Workflow

Based on this experiment, here's how I actually leverage AI tools:

1. Content Summarization and Research

I use AI for information distillation - parsing documentation, summarizing technical articles, and extracting key insights from complex codebases.

2. Component Enhancement and Tailwind Design

Rather than full "no-code" generation, I use AI for incremental improvements. I provide basic component structures and let AI enhance them - adding Tailwind classes and responsive design patterns - then refine the results according to my specific requirements.
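A hypothetical before/after of that enhancement step - the component markup is invented for illustration, but the class names are standard Tailwind utilities, including a responsive `md:` variant of the kind I review and adjust afterwards:

```javascript
// Before: the bare structure I write myself.
const cardBefore =
  "<div><h3>Project</h3><p>Description</p></div>";

// After: the same structure with AI-suggested Tailwind classes,
// including a responsive md: variant. I then prune or tweak these.
const cardAfter =
  '<div class="rounded-lg p-4 shadow md:p-6">' +
  '<h3 class="text-lg font-semibold">Project</h3>' +
  '<p class="text-sm text-gray-600">Description</p>' +
  "</div>";
```

Keeping the structural authoring on my side and the utility-class busywork on the AI's side is what makes this step reviewable.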

3. Content Creation and Technical Writing

This blog post itself is AI-generated (with significant human oversight). As a developer, I'm not a natural content writer, but I can write effective prompts and quality-check AI-generated content.

4. Code Review and Optimization

AI serves as an excellent static analysis tool - identifying potential performance bottlenecks, suggesting refactoring opportunities, and catching edge cases I might miss.

[Image: Developer debugging code]

The Realistic AI Development Paradigm

The 2025 AI hype contains kernels of truth wrapped in marketing hyperbole. Large Language Models are genuinely transformative for specific development tasks, but they're augmentation tools, not replacement technologies.

The most effective approach treats AI as an intelligent pair programming partner - excellent at handling boilerplate and suggesting improvements, but requiring human oversight for architectural decisions.

Would I use AI tools again? Absolutely. But with proper expectation calibration and robust quality assurance processes.

The future of development isn't human vs. AI - it's human-AI collaboration optimized for respective strengths.


Want to dive deeper into modern web development? Check out my other posts.