For an engineer passionate about technology, a personal blog should be more than a platform for publishing content; it is a proving ground for ideas and a living record of an evolving tech stack. Mine honestly reflects my understanding, struggles, and breakthroughs at different stages. The architecture I dissect today is not a blueprint created in one stroke, but the final chapter of an evolution spanning several years: four acts of a technical drama. This story is not just about technology choices; it is about an engineering philosophy, iterated constantly on the path toward autonomy, efficiency, and control.
Act One began with a compromise for convenience. I lingered briefly in the grandeur of WordPress but was quickly put off by its bloat. I then embraced Hexo and enjoyed three years of pure static writing. The price of that purity, however, was a rigid process: every minor change meant a full local generation, commit, and deployment. This "developer-only" publishing experience gradually turned from convenience into a fundamental pain point constraining my creative process, igniting the first spark of change.
Act Two was my hesitant step from front-end developer toward a full-stack mindset. I built my first real back-end prototype with Express.js, implementing user authentication and dynamic article management (CRUD). This was a huge leap at the time, proving I could break free of static constraints. But the prototype quickly hit two walls: a Docker image exceeding 100MB, and performance that was barely usable (early local requests took over 100ms). These bottlenecks made me realize clearly that there is a huge gap between a system that is merely "usable" and one that is "good".
Act Three was the awakening of performance and efficiency. Faced with the problems exposed by the Express prototype, I stood at a new technical crossroads. After evaluating Java and Go, comparing syntax and post-build artifact size, I chose the latter. This decision was a key turning point in the entire evolution. Compiled Go collapses the back-end service into a single, extremely small binary, eradicating the stubborn problem of bloated images; its built-in concurrency and extremely low resource consumption answered my thirst for performance.
Act Four, the final form we see today, is the culmination of this long evolution. It not only solves the pain points of every past stage, from publishing convenience to lightweight deployment to service performance, but, more importantly, lays a solid foundation for the future. Through deep integration of AI-assisted creation and seamless bilingual internationalization, the creative experience this architecture provides surpasses anything the earlier stages could offer.
This article is not a cold technical specification. I will walk you through these four acts, interpreting the "what", the "why", and the "how I was thinking" behind every design decision of the final architecture. It is both a complete record of my technical growth and the engineering philosophy I hope to share with you: how to build your own sustainable, evolving system.
After exploring static generation and back-end prototypes, the final architectural form must have high cohesion and low coupling, and be able to clearly reflect my core design philosophy. It's not a single, monolithic entity, but an organic ecosystem where multiple focused, efficient services work together.
From a bird's-eye view, the entire system is designed as a set of services running in independent containers, coordinated through a unified entry gateway. The core of this blueprint is separation of responsibilities and clear communication.
Core Gateway (Nginx): The throat of the entire system is Nginx. It is not just a web server but an intelligent traffic-scheduling center and security barrier. All external requests, whether from readers or administrators, arrive first at Nginx, which parses the intent of each request and forwards it precisely to the correct downstream service.
Front-End Service (Next.js Container): The container responsible for the user interface and experience. It hosts the public-facing blog.
Admin Front-End Service (React SPA): The interface for the admin back-end, which I use for managing articles (CRUD).
Back-End Service (Go API Container): This is the "brain" of the system, a pure, headless API server. It's responsible for all business logic, data persistence, user authentication, and interaction with the external world. Nginx seamlessly forwards all requests explicitly directed to the API to this service.
External Dependencies (AI Services): Beyond the boundaries of this blueprint, there are third-party AI services. The back-end Go service acts as an internal gateway, the only component in the system authorized to communicate with these external AI APIs. The front-end application never directly calls AI services; all AI-powered functions must be relayed through our own back-end API.
Why does a blog need two "front-ends"? The core of this design lies in my deep understanding that "public readers" and "back-end authors" are two completely different users, whose core needs are even contradictory. Forcing them into a single technical framework inevitably leads to compromise. Therefore, I adopted a strategic separation architecture.
Public Blog (SSR/ISR - Server-Side Rendering/Incremental Static Regeneration): For public-facing blog posts, the most important metrics are first-screen loading speed and search engine friendliness (SEO). Next.js's server-side rendering (SSR) and incremental static regeneration (ISR) strategies are designed for this purpose. It can generate complete HTML pages directly on the server-side or generate static pages at build time and update them on demand in the background. This means that readers and search engine crawlers can receive meaningful content at the first moment, achieving the ultimate performance experience and the best SEO results.
Admin Back-End (CSR - Client-Side Rendering): For the admin back-end that I personally use, SEO is meaningless. The core needs are rich interactivity and efficient creation processes, a kind of "application" experience. Therefore, the admin back-end is designed as a pure client-side rendering (CSR) single-page application (SPA). On the first load, the browser downloads the application framework, and all subsequent operations - whether writing articles, uploading images, or calling AI functions - are dynamically completed on the front-end through asynchronous API calls, without refreshing the entire page. This provides a smooth and efficient management experience like desktop software.
This "dual front-end" strategy allows me to avoid painful trade-offs between SEO and back-end interactivity, and instead provides the current industry's "best solution" for both scenarios.
From the beginning of the project, I decided to make bilingual (Chinese/English) support a core feature, not a patch added later. This decision influenced the design of the entire architecture from top to bottom.
Language Imprint on Front-End Routing: Internationalization is first reflected in the URL design. A clear path-prefix scheme is used (`yourdomain.com/zh/...` and `yourdomain.com/en/...`). This design is extremely friendly to both users and search engines: it clearly identifies the language of the content, lets Next.js load the corresponding language resources and content based on the URL, and allows independent SEO optimization for each language's pages.
Inclusive Back-End Data Model: To support front-end language switching, the back-end data model must have the ability to store multilingual content. My design avoids the clumsy practice of creating duplicate article records for each language. Instead, at the database level, the "title," "summary," "body," and other core fields of the article are designed with structures that can accommodate multilingual versions. This means that a logical article simultaneously contains Chinese and English content in its data record. The API interface dynamically extracts and returns data in the corresponding language version based on the language identifier passed in from the front-end request. This design greatly simplifies content management and ensures data consistency between different language versions.
If the overall architecture is the skeleton, then the back-end is the heart and brain of this system. Each technology choice here is not a trend, but a direct response to the bottlenecks encountered in the Express prototype in Act Two, a profound reflection on performance, efficiency, and long-term maintainability.
The choice of back-end technology stack aims to establish a high-performance, resource-saving, and easily deployable service.
Go/Gin: The "Best Solution" for Performance and Deployment

After the Express prototype exposed the two major problems of performance and deployment size, I turned my attention to compiled languages. Choosing Go was a strategic decision.
Go compiles to a single, statically linked binary with no runtime dependencies and no `node_modules` directory. The final production image size dropped dramatically from over 100MB in the Express era to within 10MB. This is not just a reduction in size, but an order-of-magnitude decrease in operational complexity.

PostgreSQL vs. MySQL: A Deep Reflection on Data Models

The choice of database is equally critical. Although MySQL is a widely used and excellent database, I ultimately chose PostgreSQL because some of its advanced features are highly compatible with the design concept of my project.
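One way such a multilingual, document-flavored model maps onto PostgreSQL is through `JSONB` columns, the feature the closing retrospective credits for embracing complex data models. The schema below is an illustrative sketch; table and column names are assumptions, not the project's actual schema:

```sql
-- Illustrative only: one logical post row stores both language versions
-- of each core field in a JSONB column.
CREATE TABLE posts (
    id         BIGSERIAL PRIMARY KEY,
    slug       TEXT UNIQUE NOT NULL,
    title      JSONB NOT NULL,   -- e.g. {"zh": "你好", "en": "Hello"}
    summary    JSONB,
    body       JSONB NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- The API extracts one language at query time:
SELECT slug, title->>'en' AS title FROM posts;
```

MySQL also has a JSON type, but PostgreSQL's `JSONB` adds binary storage and rich operators, which is the kind of fit the paragraph above alludes to.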
API Design Philosophy: Clear, Consistent RESTful Practices
I follow widely recognized RESTful design principles to build APIs. This means using standard HTTP methods (`GET`, `POST`, `PUT`, `DELETE`) to correspond to the query, creation, modification, and deletion of resources, and using URLs to clearly identify resources. All requests pass through unified middleware for logging, user authentication, and data validation. Responses use standard JSON, accompanied by accurate HTTP status codes. This design makes the intent of the API clear and its behavior predictable, which greatly simplifies front-end/back-end debugging and future maintenance.
AI as a Service Gateway: A Strategic Reverse Proxy Pattern

This is one of the most strategic designs in the back-end architecture. AI features never call third-party AI APIs directly from the front-end; instead, an "AI gateway" built inside the Go back-end mediates every call. The front-end only needs to call a unified endpoint such as `POST /api/ai/translate` and pass in the text to be translated, without caring which vendor sits behind it or what its API parameters look like. All adaptation work happens in the back-end.

The front-end is the "face" of the entire system. Here, all the powerful capabilities of the back-end must be transformed into a user-friendly, intuitive, and efficient experience. My front-end architecture strictly follows the principles set in the "dual front-end strategy": a seamless, fast reading experience for public readers, and an AI-powered, efficient creation environment for the back-end author (me).
The goal of the public blog is to make content consumption as smooth and natural as possible, regardless of the language used by the reader. Internationalization (i18n) is not an add-on feature, but part of the core experience.
URL-Based Language Routing: The path prefix (`/en/` vs `/zh/`) is the cornerstone of internationalization. In Next.js, this means the route itself becomes the single source of truth for the "language state". Whether the user arrives through a direct link or switches languages on the page, a change in the URL drives the content and interface language of the entire page. This design is not only extremely SEO-friendly, but also gives the URL itself clear semantics.

Interface Translation Resources: The text of the interface "skeleton" lives in per-language resource files (`en.json`, `zh.json`). The application automatically loads the JSON file matching the language of the current route and renders all interface strings in the correct language.

SEO for Both Languages: When search engine crawlers visit `/en/my-post` and `/zh/my-post`, they each obtain a complete, pre-rendered HTML page with fully translated content and metadata (meta tags). I also carefully set the `hreflang` tag in the `<head>` section of each page to declare to search engines that these two URLs are different language versions of the same content. This ensures my blog content gets the best exposure in both English and Chinese search results.

The design philosophy of the admin back-end is "efficiency first". I need to free myself from tedious, repetitive labor and focus on creation itself. AI here is not a flashy decoration; it is my "creation co-pilot", deeply integrated into every step of the workflow.
Design Concept: AI is a Tool to Eliminate Creation Friction Each AI function that I integrate directly points to a specific pain point in the creation process. The core goal is to reduce flow interruptions and automate non-core tasks.
User Experience: Seamlessly Embedded AI Interaction Process The interaction between the author (me) and AI is designed to be as intuitive and imperceptible as possible, and they appear where they are most needed.
In the end, this admin back-end is no longer a simple CMS (content management system), but has evolved into an AI-enhanced integrated writing application customized to improve creation efficiency.
The philosophy of DevOps is to closely integrate development (Dev) and operations (Ops), and realize the rapid and reliable delivery of software through automated tools and processes. My goal is to establish an ideal workflow of "one submission, automatic deployment" to completely free my energy from repetitive operation and maintenance tasks.
Containerization is the cornerstone of achieving environmental consistency, but what I pursue is not simple containerization, but extreme efficiency. This concept is directly reflected in the design of the Dockerfile, and I adopted the Multi-stage builds strategy for both the Go back-end and the Next.js front-end.
Go Back-End Image Slimming Technique: This process is divided into two acts.

The first act uses a build image containing the complete Go compilation environment (`golang:1.22-alpine`) as a "temporary factory". In this stage, I copy the source code, download all dependencies, and then execute `go build` to compile a standalone, statically linked binary executable.

The second act starts from an extremely small base image (`alpine:latest`, or theoretically even a `scratch` empty image) and copies only the compiled binary out of the first stage's "factory". This final production image contains no Go compiler, no source code, and no unnecessary libraries; its size is usually only 10-20MB. This is the ultimate solution to the bloated images of the Express prototype in the second act.

Optimized Build of the Next.js Front-End: The same philosophy applies to the front-end.

The first stage installs all dependencies (`npm install`) and executes `npm run build`, generating an optimized `.next` directory for the production environment.

The second stage copies only the production-necessary artifacts from the first: the `node_modules` and `.next` directories and the `package.json` file. All `devDependencies` needed only during the build are completely discarded. This ensures the front-end's production image is equally streamlined and secure.

Multi-stage builds are an elegant compromise: we enjoy the convenience of the complete toolchain at build time, while shipping the most minimal and safest deployment artifact at runtime.
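Putting the two acts together, a Go Dockerfile along these lines is a plausible sketch; file paths, the binary name, and the port are assumptions, not the project's actual build file:

```dockerfile
# Act one: the "temporary factory" with the full Go toolchain.
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# CGO_ENABLED=0 yields a statically linked binary that runs on a bare image.
RUN CGO_ENABLED=0 go build -o /server .

# Act two: a minimal runtime image containing only the compiled binary.
FROM alpine:latest
COPY --from=builder /server /server
EXPOSE 8080
ENTRYPOINT ["/server"]
```

Only the second stage ships; the compiler, caches, and source never reach production.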
If a Dockerfile is a blueprint for a single service, then Docker Compose is the score that conducts the entire application ecosystem. I maintain a `docker-compose.yml` file in the root directory of the project, which declaratively defines the entire application stack: `frontend`, `backend`, `database`, and `nginx`.

This single `docker-compose.yml` file is the key to ensuring a high degree of consistency between development and production. When I develop locally, I use the same configuration (possibly fine-tuned through override files) and can bring up the complete application with a single `docker-compose up`, which largely eliminates the classic "works on my machine" problem.

Nginx as the Core Reverse Proxy: Nginx is the only "gatekeeper" in my system, the entry point and security barrier for all network traffic. Its configuration precisely defines the routing logic: all requests for the `/api/` path are forwarded to the `backend` service on the internal network; all other page requests go to the `frontend` service. This hides the back-end architecture entirely and centralizes cross-cutting concerns such as request logging and rate limiting. More importantly, Nginx is responsible for SSL/TLS termination: it handles all incoming HTTPS traffic and then speaks plain HTTP to the back-end services over the secure internal network, simplifying their configuration.
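An illustrative `docker-compose.yml` for this four-service stack might look like the following; image versions, ports, and credentials are placeholders, not the real configuration:

```yaml
services:
  nginx:
    image: nginx:alpine
    ports: ["80:80", "443:443"]
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on: [frontend, backend]
  frontend:
    build: ./frontend        # Next.js, multi-stage Dockerfile
  backend:
    build: ./backend         # Go API, multi-stage Dockerfile
    environment:
      - DATABASE_URL=postgres://blog:secret@database:5432/blog
    depends_on: [database]
  database:
    image: postgres:16
    environment:
      - POSTGRES_USER=blog
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=blog
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

Only `nginx` publishes ports; the other three services are reachable solely on the internal Compose network, which is what lets Nginx act as the single gatekeeper.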
Certbot Automates HTTPS: To enable HTTPS across the entire site, I use Certbot with Let's Encrypt to automate TLS certificate management. Certbot runs in a container and automatically completes the whole process of requesting a certificate, passing validation, and configuring Nginx to use it. It also renews certificates before they expire via a scheduled background task (cron job). This is a classic "set it and forget it" solution that provides bank-level security for my website at zero cost and with zero manual intervention.
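A sketch of the corresponding Nginx server block, combining the `/api/` routing rule with Certbot's conventional certificate paths; the domain, ports, and upstream names are assumptions:

```nginx
# Illustrative only: /api/ goes to the Go service, everything else to Next.js.
server {
    listen 443 ssl;
    server_name yourdomain.com;

    # Certbot's default layout for Let's Encrypt certificates.
    ssl_certificate     /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

    location /api/ {
        proxy_pass http://backend:8080;   # internal, unencrypted hop
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location / {
        proxy_pass http://frontend:3000;
        proxy_set_header Host $host;
    }
}
```

TLS terminates here; `backend` and `frontend` are the Compose service names resolved on the internal network.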
This is the last piece of the puzzle for the entire DevOps closed loop. I use GitHub Actions to design a fully automated pipeline from code submission to online deployment.
Workflow Design: The workflow triggers automatically every time I push a new `tag`. It builds the Docker images, pushes them to the image repository, and then connects to the VPS over SSH: `docker-compose pull` fetches the images just pushed, and `docker-compose up -d` replaces the old containers with new ones in a rolling fashion, completing a zero-downtime update of the service.

A Pragmatic Choice for the Admin Back-End: `rsync`. The admin back-end is completely client-side rendered (CSR), so its build artifacts are pure static files (HTML/CSS/JS). Although it could also be packaged into a container, I chose a lighter, more pragmatic deployment. After building this part, the CI/CD pipeline uses `rsync` over SSH to synchronize the static files into the directory Nginx serves. Because `rsync` only transmits changed files, deployment is extremely fast. This choice reflects engineering flexibility: not blindly pursuing technological uniformity, but choosing the most efficient solution for each specific problem.
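A hedged sketch of such a tag-triggered GitHub Actions workflow; secret names, registry login, and server paths are assumptions, not the actual pipeline:

```yaml
name: deploy
on:
  push:
    tags: ['v*']          # triggers only on version tags
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push images
        # (registry login step omitted for brevity)
        run: |
          docker compose build
          docker compose push
      - name: Rolling update on the VPS
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            cd /srv/blog
            docker compose pull
            docker compose up -d
```

The admin back-end's `rsync` step would slot in as one more job, synchronizing the built static files instead of pulling an image.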
In today's front-end engineering field, Serverless platforms such as Vercel and Netlify provide an almost magical one-click deployment experience, which greatly reduces the threshold for applications to go online. So, when I already have a modern front-end based on Next.js, why did I finally choose a more "traditional" and more "laborious" path - self-hosting all services on a cloud server (VPS)?
This is not out of rejection of new technologies, but a well-considered decision about control, cost, learning value, and long-term strategy.
First, I must admit the huge advantages of platforms like Vercel. For pure front-end projects or teams that want to reduce operation and maintenance costs to zero, they are undoubtedly excellent choices. The CI/CD, global CDN, Serverless functions, and other functions they provide represent the most cutting-edge engineering practices in the industry.
However, my personal blog project carries not only content, but also a complete test field for my personal technology system. In this context, the advantages of self-hosting on VPS become prominent.
Absolute Cost Controllability: Vercel and similar platforms usually adopt a usage-based payment model. This model is very friendly while traffic is low, but when your website traffic surges, or a certain feature (such as AI calls) is used heavily, the bill can grow unpredictably. A VPS provides a fixed-cost model: I pay a fixed rent every month and get all the resources of the server. Whether there are 100 visitors or 10,000 visitors per day, my cost is constant and predictable. This certainty lets me experiment freely without constantly worrying about a "cost explosion".
Complete Technical Control: This is the most essential point. On platforms like Vercel, I am a "tenant" who must abide by the landlord's rules. I cannot choose a specific database version, cannot deeply customize Nginx's caching or routing strategy, and cannot freely install back-end software I need. On my own VPS, I have root permissions; I am the owner of this digital territory. I can decide the operating system, precisely tune every PostgreSQL configuration parameter, deploy experimental services beside the blog, and set complex firewall rules. This complete control means I can squeeze the performance of the entire stack to the extreme and expand in any direction I wish.
Priceless Hands-On Learning Value: Choosing a VPS is choosing a steeper but more rewarding learning curve. Configuring a Linux server from scratch, setting up networking and firewalls, managing SSH keys, troubleshooting Docker networking, hand-writing Nginx reverse-proxy configuration, automating TLS certificate renewal... every challenge and every pitfall along the way is practical experience that books and tutorials cannot provide. It forced me to grow from a "developer" who writes code into a "full-stack engineer" who understands everything from code to server, network, and security. This VPS is not only my server, but also my best technical mentor and my lowest-cost personal laboratory.
Avoid Vendor Lock-In and Embrace Architectural Freedom
When I build everything on Vercel's proprietary ecosystem, my technology stack becomes deeply bound to the platform, and a future migration means huge reconstruction costs. My current architecture, from the operating system (Linux) to containerization (Docker), from the database (PostgreSQL) to the web server (Nginx), is built entirely on open industry standards. My `docker-compose.yml` file is like a manifest of "digital containers": it can theoretically be "moved" to any platform that supports Docker (AWS EC2, Google Cloud, DigitalOcean, even a physical server) and brought up quickly at very low cost. This architectural portability and freedom is the ultimate guarantee against being "locked in" by a single vendor, and an important long-term technology strategy.
Choosing VPS is essentially choosing a "heavy asset" path. It requires more time and energy, but the reward is unparalleled control, predictable costs, deep system-level understanding, and technical freedom that is not controlled by others. For engineers who regard personal projects as part of their own technical growth, this investment has a very high return on investment. This is the final curtain call of my four-act play and the complete closed loop of my engineering philosophy.
So far, this "four-act play" about my personal blog architecture has been fully presented. We started from the pain of publishing in the Hexo static era, went through the bottlenecks of the Express prototype in terms of performance and size, witnessed how the Go language brought a critical leap forward with its lightness and efficiency, and finally arrived at today's mature form integrated with AI empowerment, bilingual support, and running stably on the self-controlled VPS.
Looking back at this evolutionary path, I am increasingly aware that the essence of architecture is not the accumulation of "correct" technologies, but the continuous exploration and iteration of the "most suitable" solutions at each stage. Every decision, whether choosing PostgreSQL's `JSONB` to embrace complex data models, building an AI gateway in the back-end in exchange for security and flexibility, or adopting different deployment strategies for different components in CI/CD, stems from deep reflection on past problems and active preparation for future possibilities.
Finally, choosing the path of self-hosting on VPS is a concentrated embodiment of this philosophy. What it represents is a cautiousness about short-term convenience, a reverence for underlying principles, and an unremitting pursuit of technical control and long-term value.
This blog system has long surpassed the scope of an online publishing tool for me. It is a container of my thoughts, a sandbox of my technology, and a vivid proof that I, as an engineer, continue to learn, think, and create in the ever-changing digital world.
This journey has no end. And that, perhaps, is the most fascinating thing about technology.
The following are the technologies used and their official websites:

- Next.js: https://nextjs.org
- React: https://react.dev
- Go: https://go.dev
- Gin: https://gin-gonic.com
- PostgreSQL: https://www.postgresql.org
- Nginx: https://nginx.org
- Docker: https://www.docker.com
- Certbot: https://certbot.eff.org
- Let's Encrypt: https://letsencrypt.org
- GitHub Actions: https://github.com/features/actions