
Farewell to Static and Compromise: My Personal Blog's Full-Stack Architecture Evolution (Go + Next.js + Docker + VPS)


Frontend
Go
TypeScript
React
Other languages: Simplified Chinese (简体中文)
Created: 11/04/2023
Updated: 06/17/2025
Word count: 5297
Reading time: 26.48 minutes

Rather than walking through code line by line, this article reviews, from the perspective of a developer and architect, the entire evolution of my personal blog from a Hexo static site to a stack built on Go, React, and Next.js. You will see how a system, in pursuit of performance, efficiency, and a better writing experience, iterates through a four-act drama and finally settles into a mature form deployed on my own VPS, combining high performance with complete control.

Introduction: Blog Evolution - A Four-Act Play of Personal Architecture

For every engineer passionate about technology, a personal blog should be more than a platform for publishing content; it is a proving ground for ideas and a living fossil that records the evolution of their tech stack. It honestly reflects our understanding, struggles, and breakthroughs at different stages. The architecture I'm about to dissect was not designed in one sitting; it is the final chapter of an evolution spanning several years and four acts of a technical drama. This story is not just about technology choices - it is about an engineer's philosophy of continuously iterating toward autonomy, efficiency, and control.

Act One began with a compromise for convenience. I lingered briefly in the grandeur of WordPress but was quickly put off by how bloated it felt. I then embraced Hexo and enjoyed three years of purely static writing. The price of that purity, however, was a rigid process - every minor change meant a full local generate, commit, and deploy cycle. This "developer-only" publishing experience gradually turned from an initial convenience into a fundamental pain point constraining my writing, and it lit the first spark of change.

Act Two was my hesitant step from front-end developer toward a full-stack mindset. I built my first real back-end prototype with Express.js, implementing user authentication and dynamic article management (CRUD). That was a huge leap at the time - proof that I could break free of static constraints. But the prototype quickly hit two walls: a Docker image exceeding 100MB, and performance that was barely acceptable (early on, even local requests took over 100ms). These bottlenecks made me realize how wide the gap is between something that is merely "usable" and a system that is genuinely "good".

Act Three was an awakening about performance and efficiency. Faced with the problems the Express prototype exposed, I stood at a new technical crossroads. After evaluating Java and Go - comparing syntax and post-build artifact size - I chose the latter. That decision was the key turning point of the whole evolution. Go compiles the back-end service down to a tiny, single binary, eliminating the bloated-image problem for good; its built-in concurrency and very low resource consumption answered my hunger for performance.

Act Four, the final form you see today, is the culmination of this long evolution. It not only resolves the pain points of every previous stage - publishing convenience, lightweight deployment, service performance - but, more importantly, lays a solid foundation for the future. With deeply integrated AI-assisted writing and seamless bilingual internationalization, the creative experience this architecture provides goes beyond anything the earlier stages could offer.

This article is not a cold technical specification. I will walk you through these four acts and interpret the "what," the "why," and the "what I was thinking" behind every design decision in the final architecture. It is both a record of my technical growth and the engineering philosophy I hope to share: how to build a system of your own that can keep evolving sustainably.

Module 1: Overall Project Architecture Design

After exploring static generation and back-end prototypes, the final architectural form must have high cohesion and low coupling, and be able to clearly reflect my core design philosophy. It's not a single, monolithic entity, but an organic ecosystem where multiple focused, efficient services work together.

Architecture Blueprint: A Containerized Ecosystem Driven by a Gateway

From a bird's-eye view, the entire system is designed as a set of services running in independent containers, coordinated through a unified entry gateway. The core of this blueprint is separation of responsibilities and clear communication.

  • Core Gateway (Nginx): Nginx is the front door of the entire system. It is not just a web server but an intelligent traffic scheduler and security barrier. Every external request, whether from a reader or from me as the administrator, arrives at Nginx first. It parses the intent of each request and forwards it precisely to the correct downstream service.

  • Front-End Service (Next.js Container): The container responsible for the user interface and experience. It hosts the public-facing blog.

  • Admin Front-End Service (React SPA): the single-page admin application I use to manage articles (CRUD).

  • Back-End Service (Go API Container): This is the "brain" of the system, a pure, headless API server. It's responsible for all business logic, data persistence, user authentication, and interaction with the external world. Nginx seamlessly forwards all requests explicitly directed to the API to this service.

  • External Dependencies (AI Services): Beyond the boundaries of this blueprint, there are third-party AI services. The back-end Go service acts as an internal gateway, the only component in the system authorized to communicate with these external AI APIs. The front-end application never directly calls AI services; all AI-powered functions must be relayed through our own back-end API.

Dual Front-End Strategy: Creating Optimal Experiences for Readers and Authors

Why does a blog need two "front-ends"? The core of this design lies in my deep understanding that "public readers" and "back-end authors" are two completely different users, whose core needs are even contradictory. Forcing them into a single technical framework inevitably leads to compromise. Therefore, I adopted a strategic separation architecture.

  • Public Blog (SSR/ISR - Server-Side Rendering/Incremental Static Regeneration): For public-facing blog posts, the most important metrics are first-screen loading speed and search engine friendliness (SEO). Next.js's server-side rendering (SSR) and incremental static regeneration (ISR) strategies are designed for this purpose. It can generate complete HTML pages directly on the server-side or generate static pages at build time and update them on demand in the background. This means that readers and search engine crawlers can receive meaningful content at the first moment, achieving the ultimate performance experience and the best SEO results.

  • Admin Back-End (CSR - Client-Side Rendering): For the admin back-end that I personally use, SEO is meaningless. The core needs are rich interactivity and efficient creation processes, a kind of "application" experience. Therefore, the admin back-end is designed as a pure client-side rendering (CSR) single-page application (SPA). On the first load, the browser downloads the application framework, and all subsequent operations - whether writing articles, uploading images, or calling AI functions - are dynamically completed on the front-end through asynchronous API calls, without refreshing the entire page. This provides a smooth and efficient management experience like desktop software.

This "dual front-end" strategy allows me to avoid painful trade-offs between SEO and back-end interactivity, and instead provides the current industry's "best solution" for both scenarios.

Internationalization (i18n) Design: Born Global

From the beginning of the project, I decided to make bilingual (Chinese/English) support a core feature, not a patch added later. This decision influenced the design of the entire architecture from top to bottom.

  • Language-Aware Front-End Routing: Internationalization shows up first in the URL design, via a clear path-prefix scheme (yourdomain.com/zh/... and yourdomain.com/en/...). This is friendly to both users and search engines: it clearly identifies which language a page belongs to, lets Next.js load the corresponding language resources and content straight from the URL, and allows independent SEO optimization for each language.

  • Multilingual Back-End Data Model: To support language switching on the front-end, the back-end data model must be able to store multilingual content. My design avoids the clumsy approach of duplicating article records per language. Instead, at the database level, core fields such as "title," "summary," and "body" use structures that can hold multiple language versions, so one logical article carries both its Chinese and English content in a single record. The API then extracts and returns the version matching the language identifier passed in the front-end request. This greatly simplifies content management and keeps the language versions consistent with each other.
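
As a rough illustration of that idea (the conclusion later credits PostgreSQL's JSONB for it), here is a minimal Go sketch; the type and field names are my own placeholders, not the actual schema:

```go
package main

import "fmt"

// LocalizedText holds both language versions of one logical field.
// (Names are illustrative; the real schema maps these to JSONB columns.)
type LocalizedText struct {
	ZH string `json:"zh"`
	EN string `json:"en"`
}

// Article is one logical post carrying both languages in a single record.
type Article struct {
	ID      int64         `json:"id"`
	Title   LocalizedText `json:"title"`
	Summary LocalizedText `json:"summary"`
	Body    LocalizedText `json:"body"`
}

// Pick returns the requested language version, falling back to Chinese.
func (t LocalizedText) Pick(lang string) string {
	if lang == "en" && t.EN != "" {
		return t.EN
	}
	return t.ZH
}

func main() {
	a := Article{
		ID:    1,
		Title: LocalizedText{ZH: "你好，世界", EN: "Hello, World"},
	}
	// The API layer would read the lang identifier from the request
	// and return only the matching version.
	fmt.Println(a.Title.Pick("en")) // Hello, World
}
```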

Module 2: In-Depth Analysis of the Back-End Architecture

If the overall architecture is the skeleton, the back-end is the heart and brain of this system. No technology here was chosen to follow a trend; each choice is a direct response to the bottlenecks the Express prototype hit in Act Two, and the result of hard thinking about performance, efficiency, and long-term maintainability.

Technology Selection Philosophy: Pursuing Ultimate Efficiency and Simplicity

The choice of back-end technology stack aims to establish a high-performance, resource-saving, and easily deployable service.

  • Go/Gin: The "Best Solution" for Performance and Deployment. After the Express prototype exposed the two major problems of performance and deployment package size, I turned my attention to compiled languages. Choosing Go was a strategic decision.

    1. Performance and Resource Efficiency: Go is designed for concurrency. Its lightweight goroutine model lets me handle a large number of concurrent requests at very low cost, which is essential for a public-facing API service. Compared with an interpreted runtime, compiled Go code is highly efficient and uses very little memory, which directly solves the Express prototype's performance pain point.
    2. The Ultimate Embodiment of Deployment Simplicity: This is the characteristic of Go that fascinates me the most. The Go compiler can package the entire application, including all dependencies, into a lightweight, single binary file with no external dependencies. This means that my Docker image no longer needs to contain a large Node.js runtime and a complex node_modules directory. The final production image size dropped dramatically from over 100MB in the Express era to within 10MB. This is not just a reduction in size, but an exponential decrease in operational complexity.
    3. Why Gin Framework? I chose Gin because it perfectly fits Go's design philosophy - "less is more". Gin is a minimalist Web framework that provides powerful routing, middleware, and request handling functions, but does not introduce any unnecessary complexity. It is fast and stable enough to allow me to focus on the business logic itself, rather than fighting the framework.
  • PostgreSQL vs. MySQL: A Deep Reflection on Data Models. The choice of database is equally critical. Although MySQL is a widely used and excellent database, I ultimately chose PostgreSQL because several of its advanced features fit this project's design particularly well.

    1. MySQL Compatibility Issues: I ran into many MySQL compatibility problems in my WordPress days, such as the upgrade from 5.7 to 8 (and I've heard that the newer MySQL 9 may actually perform worse than 8). It caused me a lot of trouble back when my skills were limited, and it was one of the reasons I later migrated to a purely static blog.
    2. Stronger Standard Compatibility and Data Integrity: PostgreSQL is known for its strict adherence to SQL standards. In the long run, this means more predictable behavior and higher data integrity guarantees, which is crucial for storing core creative content.
    3. Excellent Concurrent Performance (MVCC): PostgreSQL adopts a multi-version concurrency control (MVCC) mechanism. Simply put, read operations do not block write operations, and vice versa. This provides a solid guarantee for the performance of the system in high-concurrency read and write scenarios, which is a forward-looking technical vision.
    4. Active Community and Extensibility: A vibrant open-source community and a rich extension ecosystem mean this technology will keep solving problems for years to come, and the extensions I need usually already have mature options (DuckDB integration, for example).
    5. Popular Trends: In Stack Overflow developer surveys since 2020, PostgreSQL's popularity has overtaken MySQL's and keeps growing every year.

API Design and AI Gateway: Building Secure, Stable, and Flexible Service Interfaces

  • API Design Philosophy: Clear, Consistent RESTful Practices. I follow widely recognized RESTful design principles: standard HTTP methods (GET, POST, PUT, DELETE) map to querying, creating, updating, and deleting resources, and URLs clearly identify those resources. All requests pass through unified middleware for logging, authentication, and validation, and responses use standard JSON with accurate HTTP status codes. This keeps the API's intent clear and its behavior predictable, which greatly simplifies front-end/back-end debugging and future maintenance. (A minimal Gin sketch follows this list.)

  • AI as a Service Gateway: A Strategic Reverse-Proxy Pattern. This is one of the most strategic designs in the back-end architecture. AI features are not implemented by calling third-party AI APIs directly from the front-end; instead, everything goes through an "AI gateway" built into the Go back-end.

    1. Absolute API Key Security: Exposing a third-party AI service's API key in front-end code is extremely dangerous. With the AI gateway model, those sensitive keys live only in the back-end server's environment variables, and the front-end knows nothing about them. The front-end talks only to my own back-end API, which acts as a trusted proxy, attaching the key and forwarding the request to the external AI service.
    2. Stable Front-End Interface: The API of external AI service providers may change, and even their request/response structure is very complex. My AI gateway provides a stable and simplified interface for the front-end. For example, the front-end only needs to call POST /api/ai/translate and pass in the text to be translated, without caring which vendor is behind it and its complex API parameters. All adaptation work is done in the back-end.
    3. Future Flexibility to Change Suppliers: The biggest advantage of this design is decoupling. If I find an AI vendor with better results or lower costs in the future, I only need to modify the internal implementation of the Go back-end AI gateway module without touching any line of code in the front-end application. This model completely avoids vendor lock-in and reserves the greatest flexibility and initiative for the future technological evolution of the system.
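
To make both ideas concrete, here is a minimal Gin sketch: RESTful routes behind shared middleware, plus the /api/ai/translate gateway route. The handler stubs, the authRequired placeholder, and the vendor URL and payload shape are assumptions for illustration, not the actual implementation.

```go
package main

import (
	"bytes"
	"encoding/json"
	"net/http"
	"os"

	"github.com/gin-gonic/gin"
)

type translateReq struct {
	Text string `json:"text" binding:"required"`
}

func main() {
	r := gin.Default() // logging + recovery middleware out of the box

	api := r.Group("/api")
	{
		// RESTful resource routes (handlers stubbed below).
		api.GET("/articles", listArticles)
		api.POST("/articles", authRequired(), createArticle)
		api.PUT("/articles/:id", authRequired(), updateArticle)
		api.DELETE("/articles/:id", authRequired(), deleteArticle)

		// AI gateway: the front-end only ever sees this stable endpoint.
		api.POST("/ai/translate", authRequired(), translateHandler)
	}

	r.Run(":8080")
}

// authRequired is a placeholder for the real authentication middleware.
func authRequired() gin.HandlerFunc {
	return func(c *gin.Context) { c.Next() }
}

func translateHandler(c *gin.Context) {
	var req translateReq
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "text is required"})
		return
	}

	// Hypothetical upstream call: the vendor URL and payload shape are
	// assumptions; the API key never leaves the server.
	payload, _ := json.Marshal(map[string]string{"text": req.Text, "target": "en"})
	upstream, err := http.Post(
		os.Getenv("AI_VENDOR_URL")+"?key="+os.Getenv("AI_API_KEY"),
		"application/json",
		bytes.NewReader(payload),
	)
	if err != nil {
		c.JSON(http.StatusBadGateway, gin.H{"error": "upstream AI service unavailable"})
		return
	}
	defer upstream.Body.Close()

	var out map[string]any
	json.NewDecoder(upstream.Body).Decode(&out)
	c.JSON(http.StatusOK, out)
}

func listArticles(c *gin.Context)  { c.JSON(http.StatusOK, []string{}) }
func createArticle(c *gin.Context) { c.Status(http.StatusCreated) }
func updateArticle(c *gin.Context) { c.Status(http.StatusOK) }
func deleteArticle(c *gin.Context) { c.Status(http.StatusNoContent) }
```

Because the front-end only ever calls /api/ai/translate, swapping vendors later means touching nothing but translateHandler.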

Module 3: Front-End Architecture and User Experience

The front-end is the "face" of the entire system. Here, all the powerful capabilities of the back-end must be transformed into a user-friendly, intuitive, and efficient experience. My front-end architecture design strictly follows the principles set in the "dual front-end strategy": providing a seamless and fast reading experience for public readers, and providing an AI-powered efficient creation environment for back-end authors (me).

Public Blog: Creating a Seamless Bilingual Reading Experience

The goal of the public blog is to make content consumption as smooth and natural as possible, regardless of the language used by the reader. Internationalization (i18n) is not an add-on feature, but part of the core experience.

  • i18n Implementation Ideas: Routing as the Core, Resources as the Auxiliary
    1. Routing as Language Declaration: As described in the overall design, path-based routing (/en/ vs /zh/) is the cornerstone of internationalization. In Next.js this means the route itself is the single source of truth for the "language state". Whether a user follows a link directly or switches languages on the page, a change in the URL drives the whole page's content and interface language. This is extremely SEO-friendly and gives the URL itself clear semantics.
    2. Separated Content Management: Translated content falls into two categories. The first is dynamic content - article titles, bodies, and so on - served by the back-end API according to the language parameter in the request. The second is static UI text, such as "Home" in the navigation bar, "Read More" at the end of an article, and the copyright line in the footer. For this I use language resource files (en.json, zh.json); the application loads the JSON file matching the current route's language and renders all of the interface "skeleton" text in the correct language.
    3. Bilingual SEO Done Properly: Combining the SSR/ISR strategy with internationalized routing multiplies the SEO benefit. When crawlers fetch /en/my-post and /zh/my-post, each gets a complete, pre-rendered HTML page with fully translated content and metadata (meta tags). I also set hreflang tags in each page's <head> to declare to search engines that the two URLs are language versions of the same content, which gives the blog its best possible exposure in both English and Chinese search results.
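
For reference, that hreflang declaration amounts to a few alternate links in each page's <head>; the domain is the same placeholder used earlier, and the x-default choice is my assumption:

```html
<link rel="alternate" hreflang="zh" href="https://yourdomain.com/zh/my-post" />
<link rel="alternate" hreflang="en" href="https://yourdomain.com/en/my-post" />
<!-- x-default tells crawlers which version to show when no language matches -->
<link rel="alternate" hreflang="x-default" href="https://yourdomain.com/en/my-post" />
```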

Admin Back-End: An AI-Powered Integrated Creation Studio

The design philosophy of the admin back-end is "efficiency first". I want to free myself from tedious, repetitive labor and focus on writing itself. AI here is therefore not flashy decoration; it is my "co-pilot for creation", deeply integrated into every step of the workflow.

  • Design Concept: AI as a Tool to Eliminate Creative Friction. Every AI feature I integrate targets a specific pain point in the writing process. The core goal is to reduce interruptions to flow and automate non-core tasks.

    • Chinese-English Mutual Translation: Solves the tedious process of manually copying, pasting, and translating between different language versions.
    • AI Generates Covers: Solves the time consumption of finding or designing a high-quality and highly relevant image for each article.
    • AI Generates Prompts: Solves the "writing block" that may be encountered when conceiving titles, summaries, or social media copy.
  • User Experience: Seamlessly Embedded AI Interactions. The interaction between the author (me) and the AI is designed to be as intuitive and unobtrusive as possible, appearing exactly where it is needed.

    1. One-Click Translation: After I finish a Chinese article, a "Translate with AI" button sits below the adjacent English content area. One click calls the back-end AI gateway, and within a few seconds the translated English title and body are filled into the corresponding inputs. I never leave the page; I just fine-tune and polish the AI output. The whole flow stays coherent and fast.
    2. Context-Aware Cover Generation: In the article settings area, next to the traditional "Upload Cover" control, there is an "AI Generate Cover" option. It automatically extracts the article's title or summary as the basis for the generation prompt; I can use or edit that prompt, the image model returns several candidates, and I click my favorite to set it as the cover. What used to be ten-plus minutes of hunting for an image now takes under a minute.
    3. Inspiration Catalyst: A small "magic wand" icon sits next to the title and summary inputs. When I'm stuck, I click it and the AI suggests several titles or summaries in different styles based on the existing body text. It doesn't write for me; it offers inspiration and breaks mental deadlocks.

In the end, this admin back-end is no longer a simple CMS (content management system), but has evolved into an AI-enhanced integrated writing application customized to improve creation efficiency.

Module 4: DevOps and Automated Deployment

The philosophy of DevOps is to closely integrate development (Dev) and operations (Ops), and realize the rapid and reliable delivery of software through automated tools and processes. My goal is to establish an ideal workflow of "one submission, automatic deployment" to completely free my energy from repetitive operation and maintenance tasks.

Containerization Strategy: Pursuing Minimal Multi-Stage Builds

Containerization is the cornerstone of achieving environmental consistency, but what I pursue is not simple containerization, but extreme efficiency. This concept is directly reflected in the design of the Dockerfile, and I adopted the Multi-stage builds strategy for both the Go back-end and the Next.js front-end.

  • Go Back-End Image Slimming Technique: the process has two stages (a hedged Dockerfile sketch appears after this list).

    1. Build Stage: I use an official image (such as golang:1.22-alpine) containing the complete Go compilation environment as a "temporary factory". In this stage, I copy the source code, download all dependencies, and then execute go build to finally compile a standalone, statically linked binary executable file.
    2. Run Stage: I start a new, extremely streamlined base image (such as alpine:latest, or theoretically even a scratch empty image). Then, I only copy the compiled binary file from the "factory" of the first stage. This final production image does not contain any Go compiler, source code, or unnecessary libraries, and its size is usually only 10-20MB. This is the ultimate solution to the problem of bloated images of the Express prototype in the second act.
  • Optimized Build of Next.js Front-End: The same philosophy also applies to the front-end.

    1. Build Stage: In an image containing the complete Node.js environment, I install all development and production dependencies (npm install), and then execute npm run build. This process will generate an optimized .next directory for the production environment.
    2. Run Stage: I switch to a lighter Node.js runtime image. I only copy the node_modules, .next directory, and package.json files that are necessary for production from the first stage. All devDependencies that are only needed during construction are completely discarded. This ensures that the production image of the front-end service is also streamlined and secure.
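
A minimal Dockerfile sketch for the Go side, following the two stages described above; the Go version tag, paths, and port are placeholders:

```dockerfile
# Build stage: full Go toolchain acts as the "temporary factory".
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# CGO disabled so the binary is fully static and runs on a bare base image.
RUN CGO_ENABLED=0 go build -o server .

# Run stage: only the compiled binary is shipped.
FROM alpine:latest
COPY --from=builder /app/server /usr/local/bin/server
EXPOSE 8080
ENTRYPOINT ["/usr/local/bin/server"]
```

The Next.js image follows the same pattern, with a Node.js build stage and a slimmer runtime stage that keeps only the production artifacts.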

Multi-stage builds are an elegant trade-off: the convenience of the complete toolchain at build time, and the smallest, safest possible deployment artifact at runtime.

Orchestration and Deployment: Docker Compose as a Unified Conductor

If Dockerfile is a blueprint customized for a single service, then Docker Compose is the score that directs the entire application ecosystem to work together. I maintain a docker-compose.yml file in the root directory of the project, which I use to declaratively define the entire application stack:

  • Service Definition: Clearly list all services such as frontend, backend, database, nginx.
  • Network: Connect all services to a custom private virtual network to ensure safe and efficient communication between them, while isolating them from the outside.
  • Data Persistence: Configure named volumes for the PostgreSQL database to ensure that even if the container is destroyed and rebuilt, the core data can be permanently saved.
  • Environmental Consistency: This docker-compose.yml file is the key to keeping development and production highly consistent. Locally I use the same configuration (fine-tuned with an override file where needed) and can bring up the complete application with a single docker-compose up, which largely eliminates the classic "works on my machine" problem.
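
A trimmed-down sketch of what such a compose file can look like, using the service names listed above; the image names, ports, and environment variables are placeholders rather than my real configuration:

```yaml
# docker-compose.yml sketch. All services share the project's private
# Compose network; only nginx publishes ports to the outside world.
services:
  nginx:
    image: nginx:alpine
    ports: ["80:80", "443:443"]
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
    depends_on: [frontend, backend]

  frontend:
    image: ghcr.io/your-name/blog-frontend:latest
    expose: ["3000"]

  backend:
    image: ghcr.io/your-name/blog-backend:latest
    environment:
      - DATABASE_URL=postgres://blog:${DB_PASSWORD}@database:5432/blog
      - AI_API_KEY=${AI_API_KEY}
    expose: ["8080"]

  database:
    image: postgres:16-alpine
    environment:
      - POSTGRES_DB=blog
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data   # named volume survives rebuilds

volumes:
  pgdata:
```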

Network and Security: An Automated Barrier Built on Nginx and Certbot

  • Nginx as the Core Reverse Proxy: Nginx is the system's only "gatekeeper", the entry point and security barrier for all traffic. Its configuration defines the routing logic precisely: every request under /api/ is forwarded to the backend service on the internal network, and all other page requests go to the frontend service (a configuration sketch follows this list). This hides the back-end architecture completely and centralizes cross-cutting concerns such as request logging and rate limiting. More importantly, Nginx performs SSL/TLS termination: it handles all incoming HTTPS traffic and then talks to the back-end services over plain HTTP on the secure internal network, which keeps their configuration simple.

  • Certbot for HTTPS Automation: To serve the whole site over HTTPS, I use Certbot with Let's Encrypt to automate TLS certificate management. Certbot runs in its own container and handles the entire cycle of requesting and validating certificates and wiring them into Nginx, while a scheduled task (cron job) renews them automatically before they expire. It is a classic "set it and forget it" solution that secures the site at zero cost and with no manual intervention.
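
A simplified sketch of the gateway configuration described above; the domain, certificate paths, upstream ports, and the /admin/ location for the rsync-deployed admin files are assumptions:

```nginx
server {
    listen 443 ssl;
    server_name yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

    # API traffic goes to the Go service over the internal network.
    location /api/ {
        proxy_pass http://backend:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Admin back-end: pure static files synced by rsync (path is assumed).
    location /admin/ {
        root /var/www;
        try_files $uri $uri/ /admin/index.html;
    }

    # Everything else is rendered by the Next.js container.
    location / {
        proxy_pass http://frontend:3000;
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name yourdomain.com;
    return 301 https://$host$request_uri;
}
```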

CI/CD Workflow: Automated Highway Driven by GitHub Actions

This is the last piece of the puzzle for the entire DevOps closed loop. I use GitHub Actions to design a fully automated pipeline from code submission to online deployment.

  • Workflow Design: The workflow is triggered automatically every time I push a new tag (a hedged workflow sketch follows this list).

    1. Build and Push: The workflow will first execute a multi-stage Docker build to generate optimized front-end and back-end images. After the build is successful, these images will be tagged with a unique label (such as the Git commit hash) and pushed to a container image repository such as GitHub Container Registry.
    2. Secure Deployment: The last step of the pipeline connects securely to my cloud server (VPS) via SSH and runs a short pre-set script there: docker-compose pull fetches the images just pushed to the registry, then docker-compose up -d swaps the old containers for the new ones, so the service is updated with essentially no visible interruption.
  • A Pragmatic Choice for the Admin Back-End: rsync. The admin back-end is entirely client-side rendered (CSR), so its build output is just a set of static files (HTML/CSS/JS). It could be containerized too, but I chose a lighter, more pragmatic deployment: after building this part, the CI/CD pipeline uses rsync over SSH to synchronize the static files into the directory Nginx serves. rsync transfers only changed files, so this step is extremely fast. The choice reflects a kind of engineering flexibility - not pursuing technical uniformity for its own sake, but picking the most efficient solution for each specific problem.
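
A condensed sketch of what such a workflow can look like. The registry path, secret names, the third-party SSH action, and the single-image build step are illustrative assumptions (the real pipeline builds both images and also rsyncs the admin bundle):

```yaml
# .github/workflows/deploy.yml (sketch)
name: deploy
on:
  push:
    tags: ["v*"]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push the backend image
        uses: docker/build-push-action@v5
        with:
          context: ./backend
          push: true
          tags: ghcr.io/${{ github.repository }}-backend:${{ github.sha }}

      - name: Roll out on the VPS over SSH
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          script: |
            cd /opt/blog
            docker-compose pull
            docker-compose up -d
```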

Module 5: Deployment Philosophy - Why VPS Instead of Vercel?

In today's front-end engineering field, Serverless platforms such as Vercel and Netlify provide an almost magical one-click deployment experience, which greatly reduces the threshold for applications to go online. So, when I already have a modern front-end based on Next.js, why did I finally choose a more "traditional" and more "laborious" path - self-hosting all services on a cloud server (VPS)?

This is not out of rejection of new technologies, but a well-considered decision about control, cost, learning value, and long-term strategy.

First, I must admit the huge advantages of platforms like Vercel. For pure front-end projects or teams that want to reduce operation and maintenance costs to zero, they are undoubtedly excellent choices. The CI/CD, global CDN, Serverless functions, and other functions they provide represent the most cutting-edge engineering practices in the industry.

However, my personal blog project carries not only content, but also a complete test field for my personal technology system. In this context, the advantages of self-hosting on VPS become prominent.

  1. Absolute Cost Controllability. Vercel and similar platforms usually bill on a usage basis. That model is very friendly at low traffic, but when traffic surges, or a particular feature (such as AI calls) is used heavily, the bill can grow unpredictably. A VPS gives me a fixed-cost model: I pay the same rent every month and get the whole server's resources. Whether there are 100 visitors a day or 10,000, my cost is constant and predictable. That certainty lets me experiment and promote the site freely without worrying about a "cost explosion".

  2. Complete Technical Control. This is the most important point. On a platform like Vercel I am a "tenant" who must follow the landlord's rules: I cannot pick a specific database version, deeply customize Nginx's caching or routing, or freely install whatever back-end software I need. On my own VPS I have root - I own this digital territory. I can choose the operating system, tune every PostgreSQL parameter for performance, deploy experimental services alongside the blog, and set up complex firewall rules. That control means I can push the whole stack's performance as far as it will go and extend it in any direction I choose.

  3. Priceless Practical Learning Value. Choosing a VPS means choosing a steeper but far more rewarding learning curve. Configuring a Linux server from scratch, setting up networking and the firewall, managing SSH keys, debugging Docker networking, hand-writing Nginx reverse-proxy configuration, automating TLS certificate renewal... every challenge and every pitfall along the way is practical experience that books and tutorials cannot provide. It forced me to grow from a "developer" who writes code into a "full-stack engineer" who understands everything from code to servers, networking, and security. This VPS is not just my server; it is my best technical mentor and my cheapest personal laboratory.

  4. Avoid Vendor Lock-In, Embrace Architectural Freedom. If I built everything on Vercel's proprietary ecosystem, my stack would be deeply bound to that platform, and a future migration would mean a huge rebuild. My current architecture - from the operating system (Linux) to containerization (Docker), from the database (PostgreSQL) to the web server (Nginx) - is built entirely on open industry standards. My docker-compose.yml is like a packing list of "digital containers": in principle the whole stack can be moved to any platform that supports Docker (AWS EC2, Google Cloud, DigitalOcean, and so on), or even a physical server, and brought back up at very low cost. That portability and freedom is the ultimate guarantee against being locked in by a single vendor, and an important long-term technology strategy.

Choosing a VPS is essentially choosing the "heavy asset" path. It demands more time and energy, but the rewards are unmatched control, predictable cost, a deep system-level understanding, and technical freedom that no one else dictates. For an engineer who treats personal projects as part of their own growth, that investment pays back many times over. This is the final curtain of my four-act play, and the closing of the loop in my engineering philosophy.

Conclusion: Architecture as a Journey

So far, this "four-act play" about my personal blog architecture has been fully presented. We started from the pain of publishing in the Hexo static era, went through the bottlenecks of the Express prototype in terms of performance and size, witnessed how the Go language brought a critical leap forward with its lightness and efficiency, and finally arrived at today's mature form integrated with AI empowerment, bilingual support, and running stably on the self-controlled VPS.

Looking back along this path, I see ever more clearly that the essence of architecture is not accumulating the "correct" technologies, but continuously seeking and iterating toward the "most suitable" solution for each stage. Every decision - choosing PostgreSQL's JSONB to embrace a more complex data model, building an AI gateway in the back-end to gain security and flexibility, adopting different deployment strategies for different components in CI/CD - grew out of hard reflection on past problems and deliberate positioning for future possibilities.

Finally, choosing to self-host on a VPS is the concentrated expression of this philosophy: a wariness of short-term convenience, a respect for underlying principles, and a persistent pursuit of technical control and long-term value.

This blog system has long surpassed the scope of an online publishing tool for me. It is a container of my thoughts, a sandbox of my technology, and a vivid proof that I, as an engineer, continue to learn, think, and create in the ever-changing digital world.

This journey has no end. And that, perhaps, is the most fascinating thing about technology.

References

The following are the technology stacks used and their official websites:

  • Next.js Official Website: https://nextjs.org
  • shadcn/ui Official Website: https://ui.shadcn.com
  • React Official Website: https://react.dev
  • Go Official Website: https://golang.org
  • PostgreSQL Official Website: https://www.postgresql.org
  • Ant Design Official Website: https://ant.design
  • Tailwind CSS Official Website: https://tailwindcss.com
  • Gin Official Website: https://gin-gonic.com
  • Gorm Official Website: https://gorm.io
