**How to Use `std::variant` for Sum Types in Modern C++**

std::variant is a type-safe union that was introduced in C++17. It lets you store one value from a set of types in a single variable, much like a discriminated union in other languages. This feature is incredibly useful for modeling sum types, handling error states, or simply reducing the need for manual type checks.


1. Basic Declaration and Initialization

#include <variant>
#include <iostream>
#include <string>

int main() {
    std::variant<int, std::string> data = 42;           // holds an int
    std::variant<int, std::string> text = std::string("hello");

    std::cout << "int value: " << std::get<int>(data) << '\n';
    std::cout << "string value: " << std::get<std::string>(text) << '\n';
}
  • The variant is a template that takes an arbitrary number of types.
  • The contained value can be retrieved with `std::get<T>(variant)` (or `std::get<I>(variant)` by index). If the wrong type is requested, a `std::bad_variant_access` exception is thrown.
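If you would rather avoid the exception entirely, `std::holds_alternative` and `std::get_if` let you test before (or instead of) accessing; a minimal sketch:

#include <variant>
#include <iostream>
#include <string>

int main() {
    std::variant<int, std::string> data = 42;

    // Query the active alternative without risking std::bad_variant_access.
    if (std::holds_alternative<int>(data)) {
        std::cout << "holds an int: " << std::get<int>(data) << '\n';
    }

    // std::get_if returns a pointer to the value, or nullptr if that type is not active.
    if (const auto* s = std::get_if<std::string>(&data)) {
        std::cout << "holds a string: " << *s << '\n';
    } else {
        std::cout << "no string stored\n";
    }
}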

2. Visiting – The Safe Way to Handle All Cases

Instead of manually checking the active type, use std::visit with a lambda or a functor.

#include <variant>
#include <iostream>
#include <string>

int main() {
    std::variant<int, std::string> data = "world";

    std::visit([](auto&& val) {
        using T = std::decay_t<decltype(val)>;
        if constexpr (std::is_same_v<T, int>) {
            std::cout << "int: " << val << '\n';
        } else if constexpr (std::is_same_v<T, std::string>) {
            std::cout << "string: " << val << '\n';
        }
    }, data);
}

std::visit will call the visitor with the currently stored value, allowing you to write type‑agnostic code.
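std::visit can also produce a result, as long as every branch returns the same type; a small sketch:

#include <variant>
#include <iostream>
#include <string>

int main() {
    std::variant<int, std::string> data = 7;

    // Every branch must yield the same result type (std::string here).
    std::string label = std::visit([](auto&& val) -> std::string {
        using T = std::decay_t<decltype(val)>;
        if constexpr (std::is_same_v<T, int>) {
            return "int: " + std::to_string(val);
        } else {
            return "string: " + val;
        }
    }, data);

    std::cout << label << '\n';
}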


3. Common Pitfalls

  1. Ambiguous Overloads
    Ambiguity arises when the initializer matches no alternative exactly but converts equally well to more than one of them; the compiler then cannot decide which alternative to construct.

    std::variant<long, long long> v = 5; // ambiguous: int converts equally well to long and long long

    Use an explicit type or std::in_place_type (see the sketch after this list): std::variant<long, long long> v = 5L;

  2. Copying Variants with Large Types
    A std::variant is always at least as large as its largest alternative (plus a discriminator), and that storage lives inline in every instance. If one alternative is very large, consider holding it indirectly, e.g. via std::unique_ptr inside the variant.

  3. Silently Unhandled Cases in a Generic Visitor
    A generic lambda with an if constexpr chain still compiles if you forget a type; the unhandled alternative simply does nothing at runtime. If you want missing cases to be caught, use a visitor with one non-generic overload per alternative so the compiler rejects any omission (see the sketch after this list).
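To make both fixes concrete, here is a minimal sketch: std::in_place_type selects the alternative explicitly, and a visitor with one overload per alternative turns a forgotten case into a compile error.

#include <variant>
#include <iostream>

int main() {
    // Pitfall 1: pick the alternative explicitly when conversions are ambiguous.
    std::variant<long, long long> v{std::in_place_type<long>, 5};

    // Pitfall 3: non-generic overloads make a missing case a compile-time error.
    struct Print {
        void operator()(long x) const      { std::cout << "long: " << x << '\n'; }
        void operator()(long long x) const { std::cout << "long long: " << x << '\n'; }
    };
    std::visit(Print{}, v);
}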


4. Practical Use‑Case: Error Handling

Replace classic std::pair<bool, T> or custom error enums with a variant.

#include <variant>
#include <string>
#include <iostream>

struct Success {
    int result;
};

struct Error {
    std::string message;
};

using Result = std::variant<Success, Error>;

Result compute(int x) {
    if (x < 0)
        return Error{"Negative input"};
    return Success{x * 2};
}

int main() {
    auto res = compute(5);
    std::visit([](auto&& val){
        using T = std::decay_t<decltype(val)>;
        if constexpr (std::is_same_v<T, Success>) {
            std::cout << "Success: " << val.result << '\n';
        } else {
            std::cout << "Error: " << val.message << '\n';
        }
    }, res);
}

The variant cleanly encodes the “either” nature of the result, making the API easier to read and less error‑prone.


5. Extending with std::visit and Overload Sets

You can simplify visitors by combining overloaded lambdas:

template<class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template<class... Ts> overloaded(Ts...)->overloaded<Ts...>;

std::variant<int, std::string, double> v = 3.14;

std::visit(overloaded{
    [](int i){ std::cout << "int: " << i << '\n'; },
    [](const std::string& s){ std::cout << "string: " << s << '\n'; },
    [](double d){ std::cout << "double: " << d << '\n'; }
}, v);

This pattern keeps your visitor code concise and readable.


6. Performance Considerations

  • In‑place Storage: std::variant keeps all alternatives in a single inline buffer sized for the largest one. If one alternative is much larger than the rest, consider holding it through a pointer or std::unique_ptr (see the sketch after this list).
  • Small Types: for small alternatives the overhead is minimal; the variant is typically as fast as a hand-written union with manual tag handling.
  • Exception Safety: std::variant guarantees no resource leaks when the active alternative is swapped or destroyed, as long as the contained types are themselves exception‑safe.
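A hedged sketch of the indirection idea; BigBlob is a made-up payload type used only to show the size difference:

#include <variant>
#include <memory>
#include <array>
#include <iostream>

struct BigBlob {                          // hypothetical large payload
    std::array<char, 4096> bytes{};
};

// Stored inline, BigBlob would make every Message ~4 KB; holding it through
// unique_ptr keeps the variant pointer-sized plus the discriminator.
using Message = std::variant<int, std::unique_ptr<BigBlob>>;

int main() {
    Message small = 42;
    Message large = std::make_unique<BigBlob>();

    std::cout << "sizeof(Message) = " << sizeof(Message) << '\n';
    std::cout << "sizeof(BigBlob) = " << sizeof(BigBlob) << '\n';
}

Note that storing std::unique_ptr makes the variant move-only, which is usually acceptable for message-style types.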

7. Bottom Line

std::variant gives you a clean, type‑safe way to handle values that can be one of several types. It replaces many ad‑hoc approaches (tagged unions, unions with enums, or error‑code integers) and pairs naturally with visitation via std::visit. Embrace it for safer, clearer code when you need a sum type.

Happy coding!

**Exploring the Power of std::variant in Modern C++17**

When C++ evolved from a language dominated by raw pointers and manual memory management to one that embraces type safety and expressive abstractions, std::variant emerged as a key player in the type-safe union family. Introduced in C++17, std::variant allows you to store one of several specified types in a single variable, eliminating the dangers of void* and removing the external dependency on boost::variant. This article takes a deep dive into std::variant: its design principles, practical use cases, common pitfalls, and how it integrates with other modern C++ features such as std::visit and std::optional.


1. What is std::variant?

std::variant is a discriminated union—a compile‑time construct that can hold a value of one of several types. Unlike std::any, which stores type-erased values, std::variant retains compile‑time type information, enabling static checks and type‑safe visitation.

#include <variant>
#include <string>
#include <iostream>

std::variant<int, std::string> v = 42;     // holds an int
v = std::string("hello");                  // now holds a string

The variant keeps a discriminant (an index) indicating the active alternative, and constructs and destroys the contained object in place as the active alternative changes.
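A minimal sketch of inspecting the discriminant:

#include <variant>
#include <iostream>
#include <string>

int main() {
    std::variant<int, std::string> v = 42;
    std::cout << "active index: " << v.index() << '\n';             // 0 -> int

    v = std::string("hello");
    std::cout << "active index: " << v.index() << '\n';             // 1 -> std::string
    std::cout << std::boolalpha
              << std::holds_alternative<std::string>(v) << '\n';    // true
}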

1.1 Value‑Semantic Guarantees

std::variant behaves like a value type: it supports copy/move construction and assignment. Be aware that if constructing the new alternative throws during assignment, the variant can be left in the special valueless_by_exception state, so alternatives with non-throwing move constructors are preferable.


2. Common Operations

| Operation | Function | Description |
|---|---|---|
| Construct | `variant<Ts...>(T&&)` | Construct directly from a value convertible to one of the alternatives. |
| Access | `std::get<T>(v)` or `std::get<I>(v)` | Retrieve the contained value, throwing `std::bad_variant_access` if the wrong alternative is requested. |
| Checked access | `std::get_if<T>(&v)` | Returns a pointer to the value, or `nullptr` if `T` is not active. |
| Index | `v.index()` | Zero-based index of the active alternative. |
| Alternative count | `std::variant_size_v<V>` | Compile-time constant giving the number of alternatives. |
| Visitation | `std::visit(visitor, v)` | Apply a callable to the active value. |
| Swap | `v1.swap(v2)` | Swap two variants. |
| Default construction | `variant()` | Default-constructs the first alternative if it is default-constructible. |

2.1 The get Function

`std::get<T>(v)` retrieves the stored value of type `T`. It is the most direct way to fetch the data, but beware of the exception if the variant holds a different alternative:

try {
    std::string s = std::get<std::string>(v);
} catch (const std::bad_variant_access&) {
    std::cerr << "Variant does not hold a string.\n";
}

Alternatively, `std::get_if<T>(&v)` returns a pointer, or `nullptr` if the type is not active, enabling exception-free checks.


3. Visitation: The Heart of Variant Processing

Visitation is where `std::variant` shines. A visitor is any callable that accepts each alternative type. `std::visit` applies the visitor to the active alternative, invoking the appropriate overload.

struct Visitor {
    void operator()(int i) const { std::cout << "int: " << i << '\n'; }
    void operator()(const std::string& s) const { std::cout << "str: " << s << '\n'; }
};

std::variant<int, std::string> v = 99;
std::visit(Visitor{}, v);

3.1 Lambda Visitors

For quick inline visitors, use a generic lambda:

std::visit([](auto&& arg) {
    using T = std::decay_t<decltype(arg)>;
    if constexpr (std::is_same_v<T, int>)
        std::cout << "int: " << arg << '\n';
    else if constexpr (std::is_same_v<T, std::string>)
        std::cout << "str: " << arg << '\n';
}, v);

The `if constexpr` guard allows branching on the concrete type at compile time.

3.2 Overloaded Lambdas

A common pattern uses a helper to combine multiple lambdas:

template<class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template<class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

std::visit(overloaded{
    [](int i) { std::cout << "int: " << i << '\n'; },
    [](const std::string& s) { std::cout << "str: " << s << '\n'; }
}, v);

This creates a single visitor with an overload for each alternative.


4. Integrating Variant with Other Modern C++ Features

4.1 Combining with std::optional

Sometimes you want an optional value that can be of several types. Wrap a variant inside an optional:

std::optional<std::variant<int, std::string>> maybe = std::variant<int, std::string>{42};

Visiting becomes:

if (maybe) {
    std::visit([](auto&& val) { std::cout << val; }, *maybe);
}

4.2 Inspecting the Active Index

A visitor must return the same type for every alternative, so you cannot directly unpack "type plus value" in one expression. The usual approach for debugging is to pair the variant with its index:

auto index = v.index();
if (index == 0)
    std::cout << "int\n";
else if (index == 1)
    std::cout << "string\n";


5. Common Pitfalls and How to Avoid Them

| Pitfall | Why It Happens | Fix |
|---|---|---|
| Throwing `bad_variant_access` | Accessing the wrong alternative. | Use `std::get_if` or `std::holds_alternative` before access. |
| Copying a non-copyable alternative | Storing `std::unique_ptr` in a variant makes the variant move-only. | Accept move-only semantics, or store a copyable handle such as `std::shared_ptr`. |
| Recursive variants | An alternative cannot directly contain the variant it belongs to. | Introduce indirection, e.g. `std::unique_ptr` or `std::shared_ptr` to the node type. |
| Bloated variants | One large alternative makes every instance large. | Keep alternatives small or hold large payloads through a pointer. |


6. Real-World Use Cases

6.1 Expression Trees

struct Expr;
using ExprPtr = std::shared_ptr<Expr>;

struct IntLiteral { int value; };
struct Add { ExprPtr left, right; };

using Node = std::variant<IntLiteral, Add>;
struct Expr { Node node; };

Visiting evaluates or prints the expression:

int evaluate(const Expr& e) {
    return std::visit(overloaded{
        [](const IntLiteral& lit) { return lit.value; },
        [](const Add& a) { return evaluate(*a.left) + evaluate(*a.right); }
    }, e.node);
}

6.2 Event Handling in GUIs

using Event = std::variant<KeyEvent, MouseEvent, ResizeEvent>;

void handleEvent(const Event& e) {
    std::visit(overloaded{
        [](const KeyEvent& k)    { /* process key */ },
        [](const MouseEvent& m)  { /* process mouse */ },
        [](const ResizeEvent& r) { /* process resize */ }
    }, e);
}


7. Future Directions

All comparison operators of `std::variant` are available since C++17 whenever every alternative supports them, and C++20 added three-way comparison. The library continues to evolve, offering better integration with ranges, coroutines, and template metaprogramming.


8. Conclusion

`std::variant` is a powerful, type‑safe alternative to union-like structures. By coupling it with visitation, modern C++ programmers can write clear, maintainable code that handles multiple types without sacrificing safety or performance. Whether you're building expression trees, event systems, or generic data containers, `std::variant` provides the expressive toolkit you need. Embrace it, and let your code be both type-safe and expressive.

How to Properly Use std::shared_ptr with Custom Deleters

When you work with dynamic resources in C++, the standard library’s smart pointers are your best friends. std::shared_ptr offers reference‑counted ownership, but its default deleter only knows how to delete raw pointers. In real applications you often need to free resources in a custom way—for example, closing a file handle, releasing a DirectX texture, or invoking a C API cleanup function. A custom deleter can be supplied either at construction time or via std::shared_ptr::reset. Below is a step‑by‑step guide to creating and using a std::shared_ptr with a custom deleter, along with some common pitfalls and performance considerations.

1. Defining a Custom Deleter

A deleter is any callable object that takes a pointer of the same type that the shared_ptr manages. The simplest form is a lambda:

auto fileDeleter = [](FILE* f) {
    if (f) {
        std::fclose(f);
        std::printf("File closed.\n");
    }
};

If the resource needs more context, you can wrap the deleter in a struct:

struct TextureDeleter {
    void operator()(ID3D11Texture2D* tex) const {
        if (tex) {
            tex->Release();          // DirectX specific
            std::printf("Texture released.\n");
        }
    }
};

2. Constructing the shared_ptr

Pass the custom deleter to the constructor:

std::shared_ptr<FILE> filePtr(
    std::fopen("example.txt", "r"),
    fileDeleter   // custom deleter
);

Or if you already have a raw pointer:

FILE* rawFile = std::fopen("example.txt", "r");
std::shared_ptr<FILE> filePtr(rawFile, fileDeleter);

For types that require allocation via a factory function:

auto createTexture = []() -> ID3D11Texture2D* {
    // ... create texture ...
    return tex;
};

std::shared_ptr<ID3D11Texture2D> texPtr(
    createTexture(),
    TextureDeleter()
);

3. Using reset to Switch Resources

You can replace the managed object while keeping the same shared_ptr:

filePtr.reset(std::fopen("another.txt", "w"), fileDeleter);

The old resource will be freed automatically before the new one is set.

4. Avoiding Common Mistakes

| Mistake | Why it's a problem | Fix |
|---|---|---|
| Lambda deleter that captures by reference | Captured references may dangle if the lambda outlives the captured objects. | Capture by value or use a stateless deleter. |
| Forgetting to check for null | Some APIs may return null; the deleter must handle it gracefully. | Guard with `if (ptr) { /* ... */ }`. |
| Mixing new/delete with C functions | Calling `delete` on a pointer obtained from `malloc` or `fopen` is undefined behavior. | Ensure the deleter matches the allocation method (`free`, `fclose`, ...). |
| Ignoring the cost of stateful deleters | A capturing lambda or function pointer may prevent inlining and enlarges the control block. | For trivial cleanup use a small, stateless deleter that can be inlined; otherwise accept the minor overhead. |

5. Performance Considerations

std::shared_ptr stores the control block (reference counts and deleter) in a separate allocation. The deleter object itself lives inside that control block, so a stateful deleter, for example a lambda that captures data, enlarges the control block and can cause more heap traffic.

For high‑performance or low‑memory‑footprint scenarios, consider:

  • Using std::unique_ptr when ownership is exclusive; it stores the deleter as part of the pointer type, with no separate control block (see the sketch after this list).
  • Using std::make_shared where possible, so the object and control block come from one allocation and fragmentation is reduced; note, however, that std::make_shared cannot be combined with a custom deleter.
  • Avoiding unnecessary dynamic allocation by using std::allocate_shared with a custom allocator if you have a specialized memory pool.
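For comparison, a minimal sketch of exclusive ownership with a function-pointer deleter (assuming example.txt exists); no control block is allocated because the deleter is part of the unique_ptr type:

#include <cstdio>
#include <memory>

int main() {
    // The deleter type is baked into the unique_ptr; std::fclose runs automatically
    // when the pointer goes out of scope (and only if it is non-null).
    std::unique_ptr<FILE, int (*)(FILE*)> file(std::fopen("example.txt", "r"), &std::fclose);

    if (file) {
        char buf[64];
        if (std::fgets(buf, sizeof buf, file.get())) {
            std::printf("first line: %s", buf);
        }
    }
}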

6. Example: Managing a Custom Resource

Below is a full example that shows how to wrap a hypothetical C API that allocates and frees a Widget object.

#include <cstdio>   // std::printf
#include <memory>   // std::shared_ptr

// C API
typedef struct Widget Widget;
Widget* widget_create(int size);
void widget_destroy(Widget* w);

struct WidgetDeleter {
    void operator()(Widget* w) const {
        if (w) {
            widget_destroy(w);
            std::printf("Widget destroyed.\n");
        }
    }
};

int main() {
    // Create a shared_ptr that owns a Widget
    std::shared_ptr<Widget> widgetPtr(
        widget_create(42),
        WidgetDeleter()
    );

    // Use the widget
    // widgetPtr->do_something();

    // When widgetPtr goes out of scope, widget_destroy is called automatically.
}

7. Summary

  • A custom deleter lets std::shared_ptr manage any kind of resource, not just raw pointers.
  • Provide the deleter at construction or via reset; ensure it matches the allocation method.
  • Handle null pointers and avoid dangling captures in lambdas.
  • For performance‑critical code, consider the control block size and possibly use unique_ptr or custom allocators.

With these guidelines, you can confidently employ std::shared_ptr to manage diverse resources in modern C++ programs.

Exploring the Power of Coroutines in C++20

Coroutines, introduced in C++20, are a game‑changing feature that allows you to write asynchronous and lazy‑evaluation code in a natural, sequential style. Unlike callbacks or std::future, coroutines keep the state of a function across suspension points, enabling you to pause and resume execution with minimal overhead.

1. Basic Syntax

A function becomes a coroutine by using the co_await, co_yield, or co_return keywords in its body. Here's a minimal example:

#include <coroutine>
#include <iostream>

struct generator {
    struct promise_type;
    using handle_t = std::coroutine_handle<promise_type>;

    struct promise_type {
        int current_value;
        generator get_return_object() { return generator{handle_t::from_promise(*this)}; }
        std::suspend_always initial_suspend() { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        std::suspend_always yield_value(int v) {
            current_value = v;
            return {};
        }
        void return_void() {}
        void unhandled_exception() {}   // required member of every promise type
    };

    handle_t coro;
    explicit generator(handle_t h) : coro(h) {}
    ~generator() { coro.destroy(); }
    int next() {
        coro.resume();
        return coro.promise().current_value;
    }
};

generator count_to(int n) {
    for (int i = 1; i <= n; ++i)
        co_yield i;
}

Using the generator:

int main() {
    auto gen = count_to(5);
    for (int i = 0; i < 5; ++i) {
        std::cout << gen.next() << ' ';
    }
    // Output: 1 2 3 4 5
}

2. Practical Use Cases

  • Lazy Streams: Generate a sequence of values on demand without allocating a container.
  • Async IO: Wrap non‑blocking IO in a coroutine, letting the compiler manage the state machine.
  • Co‑routines in Game Loops: Represent game entity behaviors as coroutines that pause on events.

3. Under the Hood

When the compiler encounters a coroutine, it automatically transforms the function into a state machine. The promise_type encapsulates the coroutine’s state. co_await suspends the coroutine until the awaited expression becomes ready, while co_yield returns control to the caller and stores the yielded value.

The transformation preserves local variables across suspensions by storing them in the coroutine frame. The main runtime cost is allocating that frame, which is typically heap-allocated but can be elided by the optimizer when the coroutine's lifetime is fully visible to the caller.

4. Advanced Features

  • Custom Awaiters: Define your own await_ready, await_suspend, and await_resume to integrate with networking libraries or GUI event loops (a minimal sketch follows this list).
  • co_return with Values: Coroutines can return values; just use co_return with a value and adjust the promise_type to store the return type.
  • Co‑operation: Combine multiple coroutines using co_await to orchestrate complex workflows.
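A minimal sketch of a custom awaiter, using only the standard <coroutine> header; in a real program await_suspend would hand the coroutine handle to an event loop instead of resuming immediately:

#include <coroutine>
#include <iostream>

// Minimal fire-and-forget coroutine type, used only so we can co_await below.
struct FireAndForget {
    struct promise_type {
        FireAndForget get_return_object() { return {}; }
        std::suspend_never initial_suspend() noexcept { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
};

// Custom awaiter: returning false from await_suspend resumes the coroutine right away.
// A real awaiter would store the handle and resume it later from a callback.
struct ResumeImmediately {
    bool await_ready() const noexcept { return false; }                        // always suspend
    bool await_suspend(std::coroutine_handle<>) const noexcept { return false; }
    int await_resume() const noexcept { return 42; }                           // value produced by the await
};

FireAndForget demo() {
    int value = co_await ResumeImmediately{};
    std::cout << "resumed with " << value << '\n';
}

int main() {
    demo();
}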

5. Performance Considerations

  • Avoid Excessive Suspending: Each suspension introduces a small cost. In tight inner loops a plain loop is usually faster than a generator-style coroutine.
  • Frame Allocation: The coroutine frame is normally heap-allocated; keep coroutines short-lived and visible to the optimizer so the allocation can sometimes be elided.
  • Inlining: Trivial coroutines whose lifetime the compiler can fully see may have their frame allocation and suspensions optimized away.

6. Future Directions

C++23 extends coroutine support with std::generator, a standard library generator type that removes the need for hand-written promise types, and further library integration (for example with ranges) continues to be developed by the committee and the wider community.

Coroutines in C++20 open a new paradigm for writing cleaner, more maintainable asynchronous code. By mastering their syntax and underlying mechanics, developers can unlock performance gains and more expressive code structures.

**How to Implement a Thread-Safe, Lazily Initialized Singleton in C++**

Implementing a thread-safe, lazily initialized singleton is a common requirement in multithreaded code. Below are several common approaches, together with their advantages and drawbacks.


1. Implementation Based on std::call_once

#include <mutex>

class Singleton {
public:
    static Singleton& instance() {
        std::call_once(initFlag, [](){
            instancePtr = new Singleton();
        });
        return *instancePtr;
    }

    // copying and moving are forbidden
    Singleton(const Singleton&) = delete;
    Singleton& operator=(const Singleton&) = delete;
    Singleton(Singleton&&) = delete;
    Singleton& operator=(Singleton&&) = delete;

private:
    Singleton() = default;
    ~Singleton() = default;

    static Singleton* instancePtr;
    static std::once_flag initFlag;
};

Singleton* Singleton::instancePtr = nullptr;
std::once_flag Singleton::initFlag;

Advantages

  • The code is concise and easy to understand.
  • std::call_once is part of the standard library and well tested.
  • Initialization happens exactly once, no matter how many threads race for it.

Drawbacks

  • The singleton's lifetime must be managed manually (delete it at program exit, or hold it in a smart pointer).
  • Requires C++11 or later.

2. Double-Checked Locking

#include <mutex>

class Singleton {
public:
    static Singleton& instance() {
        if (instancePtr == nullptr) {
            std::lock_guard<std::mutex> lock(mtx);
            if (instancePtr == nullptr) {
                instancePtr = new Singleton();
            }
        }
        return *instancePtr;
    }

    // copying/moving disabled
    Singleton(const Singleton&) = delete;
    Singleton& operator=(const Singleton&) = delete;

private:
    Singleton() = default;
    ~Singleton() = default;

    static Singleton* instancePtr;
    static std::mutex mtx;
};

Singleton* Singleton::instancePtr = nullptr;
std::mutex Singleton::mtx;

Advantages

  • The lock is taken only on first access; subsequent accesses are lock-free, so performance is good.

Drawbacks

  • Visibility must be guaranteed by the memory model: the plain-pointer version shown above is formally a data race in C++11 and can misbehave on older compilers or weakly ordered hardware. A correct version uses std::atomic with acquire/release ordering (a sketch follows this list).
  • The code is more complex and easier to get wrong.
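For completeness, a data-race-free sketch of double-checked locking using std::atomic:

#include <atomic>
#include <mutex>

class Singleton {
public:
    static Singleton& instance() {
        Singleton* p = instancePtr.load(std::memory_order_acquire);
        if (p == nullptr) {
            std::lock_guard<std::mutex> lock(mtx);
            p = instancePtr.load(std::memory_order_relaxed);   // re-check under the lock
            if (p == nullptr) {
                p = new Singleton();
                instancePtr.store(p, std::memory_order_release);
            }
        }
        return *p;
    }

    Singleton(const Singleton&) = delete;
    Singleton& operator=(const Singleton&) = delete;

private:
    Singleton() = default;

    static std::atomic<Singleton*> instancePtr;
    static std::mutex mtx;
};

std::atomic<Singleton*> Singleton::instancePtr{nullptr};
std::mutex Singleton::mtx;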

3. std::shared_ptr Combined with std::once_flag

If you want the singleton to be destroyed automatically at program exit, combine it with std::shared_ptr:

#include <memory>
#include <mutex>

class Singleton {
public:
    static std::shared_ptr<Singleton> instance() {
        std::call_once(initFlag, [](){
            instancePtr = std::shared_ptr<Singleton>(new Singleton());
        });
        return instancePtr;
    }

    // The destructor must stay accessible so that shared_ptr's deleter can call it.
    ~Singleton() = default;

private:
    Singleton() = default;

    static std::shared_ptr<Singleton> instancePtr;
    static std::once_flag initFlag;
};

std::shared_ptr<Singleton> Singleton::instancePtr = nullptr;
std::once_flag Singleton::initFlag;

Advantages

  • Memory is managed automatically; the instance is destroyed at program exit.
  • Combined with std::call_once, initialization is thread-safe.

Drawbacks

  • Watch out for shared_ptr reference cycles (for example, if the singleton stores a shared_ptr back to itself).

4. Function-Local Static (and C++17 inline static)

Since C++17 an inline static data member can be defined directly in the header and is initialized exactly once across the whole program. The simpler and more common form, shown below, is a function-local static variable, whose initialization has been guaranteed thread-safe since C++11:

class Singleton {
public:
    static Singleton& instance() {
        static Singleton instance; // thread-safe since C++11
        return instance;
    }

private:
    Singleton() = default;
    ~Singleton() = default;

    // copying/moving forbidden
    Singleton(const Singleton&) = delete;
    Singleton& operator=(const Singleton&) = delete;
    Singleton(Singleton&&) = delete;
    Singleton& operator=(Singleton&&) = delete;
};

Advantages

  • Extremely concise; it relies on the C++11 guarantee that initialization of a local static variable is thread-safe.
  • No manual destruction; the lifetime ends automatically at program exit.

Drawbacks

  • Requires C++11 or later.
  • The instance must be a local static inside instance(); if the constructor does significant work, static-initialization-order issues between translation units can still bite.

5. Summary

  • std::call_once + once_flag: the safest and most maintainable option, suitable for most scenarios.
  • Double-checked locking: slightly better raw performance, but complex to implement correctly and sensitive to the memory model.
  • std::shared_ptr: suitable when you need automatic destruction and want to avoid a manual delete.
  • Function-local static (Meyers singleton): the most concise, with thread safety guaranteed by the compiler.

In practice, std::call_once is a good default because it is both simple and safe; if you are on C++11 or later and the singleton does not manage complex resources, the function-local static approach is the most direct choice.


**Coroutines in C++23: From Basics to Practice**

Coroutines are a powerful feature that entered the standard with C++20, providing concise, composable syntax for asynchronous programming and generators. In C++23 the picture matures further: the core language machinery is unchanged, but the library gains std::generator and deeper integration with standard containers and ranges. This article starts from the basic concepts, walks through the key syntax and typical use cases, highlights what is new in C++23, and ends with a complete demo to get you started quickly.


1. Core Concepts

| Term | Meaning |
|---|---|
| Coroutine function | A function whose body uses co_await, co_yield, or co_return; its return type is a coroutine type such as std::generator, a task type, or a library awaitable |
| Suspension point | A co_await / co_yield / co_return expression where the coroutine pauses and returns control to its caller |
| Resumption point | When the coroutine is resumed, execution continues from the last suspension point |
| Coroutine handle | std::coroutine_handle, used to control the coroutine's lifetime (resume, destroy, ...) |
| Promise | Every coroutine has a promise object that carries its return value, exceptions, and state |

Note: coroutines have been a standard language feature since C++20; the support types live in <coroutine> (the older <experimental/coroutine> header is only needed on pre-C++20 toolchains). In real projects you will typically pair them with a coroutine library (such as libcoro, or the facilities shipped with your compiler and async framework) for ready-made task and generator types.


2. Basic Syntax

#include <coroutine>
#include <iostream>
#include <vector>

struct Task {
    struct promise_type {
        Task get_return_object() { return {}; }
        std::suspend_never initial_suspend() { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
};

Task hello() {
    std::cout << "Hello, ";
    co_await std::suspend_always{};
    std::cout << "world!\n";
}
  • promise_type defines the coroutine's behaviour: initial_suspend and final_suspend decide whether the coroutine suspends at start-up and at completion.
  • co_await suspends and waits for an asynchronous result; co_yield produces a value from a generator; co_return finishes the coroutine, optionally with a value.

3. What's New in C++23

| Feature | Notes |
|---|---|
| std::generator | A standard generator type in <generator>; you can co_yield values without hand-writing a promise type |
| Ranges integration | std::generator models an input range, so it composes with range-for and views |
| Task types and schedulers | Not part of C++23 itself; the std::execution proposal (P2300) and library types such as asio::awaitable fill this role today |

These additions make everyday coroutine code far less template-heavy and pave the way for better support of highly concurrent programs (a short std::generator sketch follows).
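A minimal sketch of the C++23 std::generator facility (it requires a standard library that already ships <generator>):

#include <generator>   // C++23
#include <iostream>

std::generator<int> count_to(int n) {
    for (int i = 1; i <= n; ++i)
        co_yield i;            // no hand-written promise_type needed
}

int main() {
    for (int v : count_to(5))
        std::cout << v << ' ';
    std::cout << '\n';         // prints: 1 2 3 4 5
}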


4. Example: Coroutines + Thread Pool + HTTP Request

The code below shows how to combine coroutines with a thread pool to implement a simple asynchronous HTTP client. It uses asio (Boost.Asio or standalone Asio) for the network I/O and wraps the asynchronous operations in coroutines.

#include <asio.hpp>
#include <iostream>
#include <coroutine>
#include <thread>
#include <vector>

using asio::ip::tcp;

// A hand-written awaiter that wraps a single async_read_some call.
// async_get below uses asio::use_awaitable instead; this struct only
// illustrates the await_ready / await_suspend / await_resume protocol.
struct ReadSomeAwaitable {
    asio::ip::tcp::socket& socket;
    char* data;
    std::size_t size;

    asio::error_code ec{};
    std::size_t bytes_read = 0;

    bool await_ready() const noexcept { return false; }   // always suspend first

    void await_suspend(std::coroutine_handle<> h) {
        socket.async_read_some(
            asio::buffer(data, size),
            [this, h](asio::error_code e, std::size_t n) {
                ec = e;
                bytes_read = n;
                h.resume();                                // resume the coroutine on completion
            });
    }

    std::size_t await_resume() const { return bytes_read; }
};

// Asynchronous HTTP GET
asio::awaitable<std::string> async_get(asio::io_context& io, const std::string& host, const std::string& path) {
    auto executor = co_await asio::this_coro::executor;
    tcp::resolver resolver(executor);
    auto endpoints = co_await resolver.async_resolve(host, "http", asio::use_awaitable);

    tcp::socket socket(executor);
    co_await asio::async_connect(socket, endpoints, asio::use_awaitable);

    std::string request = "GET " + path + " HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n";
    co_await asio::async_write(socket, asio::buffer(request), asio::use_awaitable);

    std::string response;
    char buf[1024];
    for (;;) {
        asio::error_code ec;
        std::size_t n = co_await socket.async_read_some(
            asio::buffer(buf), asio::redirect_error(asio::use_awaitable, ec));
        if (ec == asio::error::eof) break;                 // server closed the connection
        if (ec) throw asio::system_error(ec);
        response.append(buf, n);
    }
    co_return response;
}

// Thread pool driving a shared io_context
struct ThreadPool {
    asio::io_context io;
    // Keep run() from returning while there is temporarily no queued work.
    asio::executor_work_guard<asio::io_context::executor_type> guard{io.get_executor()};
    std::vector<std::thread> workers;

    ThreadPool(std::size_t n = std::thread::hardware_concurrency()) {
        workers.reserve(n);
        for (std::size_t i = 0; i < n; ++i)
            workers.emplace_back([this] { io.run(); });
    }

    ~ThreadPool() {
        guard.reset();                      // let run() return once the work drains
        for (auto& t : workers) t.join();
    }
};

int main() {
    ThreadPool pool;
    // co_spawn launches the coroutine on the pool's io_context;
    // asio::use_future lets the main thread wait for and retrieve the result.
    auto fut = asio::co_spawn(pool.io, async_get(pool.io, "example.com", "/"), asio::use_future);
    std::string response = fut.get();
    std::cout << "Fetched " << response.size() << " bytes\n";
    return 0;
}

Notes

  1. Thread pool: several threads call run() on a shared asio::io_context, so coroutines resumed by completed I/O execute on pool threads.
  2. Asynchronous async_get: the return type asio::awaitable<std::string> lets the body co_await the async_* operations; each co_await suspends the coroutine until the operation completes.
  3. co_spawn: starts the coroutine on the io_context. asio::use_future hands back a std::future so the caller can block for the result; asio::detached would instead detach it in fire-and-forget style.

5. Common Pitfalls & Performance Tuning

| Problem | Remedy |
|---|---|
| Leaked coroutine frames | If final_suspend returns std::suspend_always, something must eventually call destroy() on the handle, usually the destructor of the owning RAII wrapper. |
| Exception propagation | promise_type::unhandled_exception sets the policy: rethrowing is common, or capture std::current_exception() and log it. |
| Frame allocation cost | Coroutine frames are generally heap-allocated; for hot paths prefer generators (std::generator) or give the optimizer a chance to elide the allocation. |
| Thread safety | Coroutines only order the code before and after a suspension point; shared data accessed from multiple threads still needs synchronization. |
| Debugging | -fsanitize=address, -fsanitize=undefined, and -fdiagnostics-color=always help track down frame-lifetime problems. |

6. Summary

  • C++23 coroutines let asynchronous code read like synchronous code, which greatly improves readability and maintainability.
  • Combined with asynchronous libraries such as asio, they enable high-performance networking, game loops, and GUI event handling.
  • Keep in mind that the coroutine itself is lightweight; the actual I/O is still performed by the underlying event loop or thread pool.

With the basic syntax and the C++23 additions under your belt, you can introduce high-concurrency, low-latency asynchronous mechanisms into your own projects and improve both performance and development efficiency. Happy coding!


Exploring C++20 Modules: A Modern Approach to Build Systems

C++20 introduced a groundbreaking feature—modules—that promises to reshape how developers write, compile, and maintain large-scale C++ applications. This article dives into the basics of modules, their advantages over traditional header inclusion, and practical tips for integrating them into existing build systems.

What Are Modules?

Modules are a language feature that replaces the conventional preprocessor-based header inclusion mechanism. They allow you to declare a module interface and module implementation, effectively bundling code into self-contained units that the compiler can treat as single, cohesive objects.

Key Concepts

  • Module Interface Unit (export module): Declares the public API that other translation units can import.
  • Module Implementation Unit (module): Contains the implementation details that remain hidden from consumers.
  • Exported Entities: Only those marked with export are visible outside the module.

Benefits Over Header-Only Design

| Aspect | Header-Only | Modules |
|---|---|---|
| Compilation time | Header recompiled in every including TU | Interface compiled once per module |
| Symbol visibility | Risk of ODR violations | Strong encapsulation enforced |
| Preprocessor overhead | Requires include guards / pragma once | No textual inclusion of the interface |
| Build system complexity | Simple but brittle for large codebases | Requires modern build tools (CMake 3.28+, recent Bazel) |

Getting Started with Modules in CMake

  1. Enable C++20

    set(CMAKE_CXX_STANDARD 20)
    set(CMAKE_CXX_STANDARD_REQUIRED ON)
  2. Define a Module Source
    math/module.cppm (the .cppm / .ixx extension is the common convention for interface units)

    export module math;
    export int add(int a, int b) { return a + b; }
  3. Compile the Module
    CMake's MODULE library type means a runtime plugin and is unrelated to C++20 modules; with CMake 3.28+ declare the interface unit as a module file set:

    add_library(math)
    target_sources(math PUBLIC
        FILE_SET CXX_MODULES FILES math/module.cppm)
    target_compile_features(math PUBLIC cxx_std_20)
  4. Consume the Module
    main.cpp

    #include <iostream>
    import math;
    int main() {
        std::cout << add(3, 4) << '\n';
    }
  5. Build

    cmake -S . -B build -G Ninja
    cmake --build build

Practical Tips

  • Avoid Mixing export and non-export: Keep module interfaces lean.
  • Use -fprebuilt-module-path: Share precompiled module files across teams.
  • Leverage Interface Libraries: Combine modules with CMake’s INTERFACE libraries for headers that remain header-only.
  • Keep Build Systems and Compilers Updated: Module support is still uneven; MSVC and recent Clang are the most complete, while GCC's implementation (via -fmodules-ts) is still maturing.

Common Pitfalls

| Issue | Fix |
|---|---|
| "unknown import" | Ensure compiler flags such as -fmodules-ts (GCC/Clang) or /std:c++latest (MSVC) are set |
| "module not found" | Verify the module name matches exactly (case-sensitive) |
| Linker errors | Compile the module as a static or shared library, then link it with every consumer |

Future Outlook

Modules are poised to become the new standard for large-scale C++ projects. They promise faster compile times, safer interfaces, and cleaner codebases. As compiler vendors mature module support, we can expect more robust tooling, better diagnostics, and tighter integration with package managers like Conan or vcpkg.

Bottom Line

Embracing C++20 modules is not just a language upgrade—it’s a strategic shift in how we build and maintain C++ software. By adopting modules early, teams can unlock significant productivity gains and reduce long-term technical debt.

**How to Implement a Thread-Safe Singleton in C++17**

In a multithreaded environment, a singleton implementation must guarantee that exactly one instance exists and that concurrent creation does not introduce race conditions. C++17 makes this simpler and safer to implement. Below we cover the theory, the code, and the common traps.


1. Theoretical Background

  1. Singleton constraints

    • Uniqueness: only one instance exists in the whole program.
    • Lazy initialization: the instance is created on first use.
    • Thread safety: concurrent access must not cause undefined behavior.
  2. Key C++17 building blocks

    • std::call_once / std::once_flag: run something exactly once, thread-safely.
    • std::unique_ptr: automatic lifetime management.
    • std::mutex and std::scoped_lock: concise lock management.
    • Initialization rules: a function-local static variable is initialized on first use, and that initialization is thread-safe (since C++11).

2. Implementation

Option 1: Function-Local Static Variable (simplest and safest)

#include <iostream>
#include <mutex>
#include <string>

class Logger {
public:
    static Logger& instance() {
        static Logger inst;   // thread-safe since C++11
        return inst;
    }

    void log(const std::string& msg) {
        std::scoped_lock lock(mtx_);
        std::cout << "[LOG] " << msg << '\n';
    }

private:
    Logger() = default;
    Logger(const Logger&) = delete;
    Logger& operator=(const Logger&) = delete;

    std::mutex mtx_;
};

Advantages

  • The shortest possible code.
  • Relies only on the standard library and is fully thread-safe.
  • The object is created on first use and then shared by all threads.

Drawbacks

  • No hook for extra cleanup at destruction (unless you register one manually with atexit).
  • Destroying the instance earlier than program exit requires more elaborate machinery.

Option 2: std::call_once with std::unique_ptr

#include <iostream>
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>

class Config {
public:
    static Config& instance() {
        std::call_once(initFlag_, []() {
            inst_.reset(new Config());
        });
        return *inst_;
    }

    void set(const std::string& key, const std::string& value) {
        std::scoped_lock lock(mtx_);
        config_[key] = value;
    }

    std::string get(const std::string& key) const {
        std::scoped_lock lock(mtx_);
        auto it = config_.find(key);
        return it != config_.end() ? it->second : "";
    }

private:
    Config() = default;

public:
    // The destructor must be accessible to std::default_delete<Config>,
    // which std::unique_ptr uses to destroy the instance at program exit.
    ~Config() = default;

private:
    std::unordered_map<std::string, std::string> config_;
    mutable std::mutex mtx_;

    static std::once_flag initFlag_;
    static std::unique_ptr<Config> inst_;
};

std::once_flag Config::initFlag_;
std::unique_ptr<Config> Config::inst_;

Advantages

  • The initialization point is explicit, and the instance can be released manually by resetting inst_ to nullptr.
  • Complex logic can run in the constructor.

Drawbacks

  • The static members must be declared and defined by hand.
  • The code is somewhat more verbose.

3. Common Traps and Their Fixes

| Trap | Explanation | Fix |
|---|---|---|
| Eager singleton | The instance is created at program start-up and consumes resources even if never used | Use lazy initialization, e.g. std::call_once |
| Copy construction / assignment | If not deleted, extra instances can appear | Delete the copy constructor and assignment operator in the class |
| Destruction order | The destruction order of static objects is unspecified, so another static may touch an already-destroyed singleton | Use std::call_once with std::unique_ptr and release explicitly when needed |
| Racy initialization | Possible with old C++ standards or hand-rolled schemes | Use std::call_once or a C++11+ function-local static, both of which are thread-safe |

4. Summary

  • Recommended: in C++17 the simplest and safest approach is the function-local static variable.
  • Finer control: if you need more control over lifetime or custom initialization/destruction, std::call_once plus std::unique_ptr offers enough flexibility.
  • Best practices: always delete the copy/assignment operators to prevent extra instances; protect mutable internal state with std::mutex and std::scoped_lock; and verify under real multithreaded load that only one instance is ever created.

With these implementations you can use the singleton pattern safely and reliably in any C++17 project.

Optimizing Memory Usage in Modern C++ with Smart Pointers

In modern C++ (C++11 and later), manual memory management is no longer the only option. Smart pointers release resources automatically through RAII (Resource Acquisition Is Initialization), reducing the risk of leaks and letting developers focus on business logic. This article looks at the common smart pointers (std::unique_ptr, std::shared_ptr, std::weak_ptr): when to use each, their performance characteristics, and best practices when combining them with containers, callbacks, and multithreaded code.

1. std::unique_ptr: the default for exclusive ownership

std::unique_ptr holds exclusive ownership of an object. It cannot be copied, only moved, which makes object lifetimes easy to reason about. Typical use cases:

  • Factory functions: return dynamically allocated objects through unique_ptr to prevent leaks.
  • Resource wrappers: file handles, network connections, and similar resources used by a single owner.
  • Custom deleters: pass a deleter as the second template parameter for resources not allocated with new/delete.

Performance details

  • Size: it stores only the pointer (plus any stateful deleter), so the overhead is minimal.
  • Inlining: modern compilers inline unique_ptr operations, so it costs about the same as a raw pointer.
  • Alignment: identical to the underlying raw pointer.

Typical code

// File is assumed to be a small RAII wrapper class around FILE*.
std::unique_ptr<File> createFile(const std::string& path) {
    FILE* fp = fopen(path.c_str(), "r");
    if (!fp) throw std::runtime_error("File open failed");
    return std::unique_ptr<File>(new File(fp));
}

2. std::shared_ptr: safe shared ownership

std::shared_ptr implements shared ownership through reference counting. It suits cases where several objects share one resource, such as textures in a graphics engine or entries in a cache. Key properties:

  • Thread safety: incrementing and decrementing the reference count is atomic.
  • Reference cycles are possible: use std::weak_ptr to break cycles that would otherwise leak.

Performance impact

  • Count maintenance: every copy/destruction of a shared_ptr performs an atomic operation, which adds a small cost.
  • Allocation overhead: with std::make_shared the object and its control block come from a single allocation, reducing fragmentation.

Practical advice

  • Avoid unnecessary sharing: if the object does not need multiple owners, prefer unique_ptr.
  • Use std::make_shared: one allocation for object plus control block performs better (see the sketch below).
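A minimal sketch of the allocation difference; Widget is just an illustrative type:

#include <memory>

struct Widget {
    int value = 0;
};

int main() {
    // Two allocations: one for the Widget, one for the control block.
    std::shared_ptr<Widget> a(new Widget{});
    a->value = 1;

    // One allocation: object and control block live in the same memory block.
    auto b = std::make_shared<Widget>();
    b->value = 2;
}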

3. std::weak_ptr: the gatekeeper against reference cycles

std::weak_ptr holds a non-owning reference and does not affect the reference count. It is the standard tool for breaking shared_ptr cycles. Typical usage:

class Node {
public:
    std::shared_ptr<Node> next;
    std::weak_ptr<Node> prev; // weak back-reference breaks the ownership cycle
};

Call lock() to promote the weak reference to a shared one; if the object has already been destroyed, lock() returns an empty shared_ptr.
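A minimal sketch of promoting a weak reference:

#include <iostream>
#include <memory>

int main() {
    std::weak_ptr<int> weak;
    {
        auto strong = std::make_shared<int>(42);
        weak = strong;
        if (auto locked = weak.lock())                 // still alive here
            std::cout << "value: " << *locked << '\n';
    }
    if (weak.expired())                                // last owner destroyed
        std::cout << "object already released\n";
}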

4. Combining smart pointers with STL containers

  • std::vector<std::unique_ptr<T>>: each element is exclusively owned by the container; create elements with std::make_unique<T>.
  • std::vector<std::shared_ptr<T>>: elements may share a resource with other parts of the program; suitable when shared ownership is genuinely needed.

Note: when container elements are copied or moved, the smart pointers manage the reference counts (or transfer ownership) automatically.

5. Working with asynchronous code and callbacks

In asynchronous callbacks, capturing a std::shared_ptr keeps the object alive for as long as the callback may run:

auto self = shared_from_this();
asyncOperation([self](Result r){ self->handle(r); });

If shared ownership is not required, std::unique_ptr moved into the callback also works, but then nothing else may reference the object afterwards. Note that shared_from_this requires the class to derive from std::enable_shared_from_this and the object to already be owned by a shared_ptr; a sketch follows.
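A hedged sketch of that pattern; asyncOperation here stands in for any API that stores and later invokes a callback:

#include <functional>
#include <iostream>
#include <memory>

// Stand-in for an asynchronous API that stores and later invokes a callback.
void asyncOperation(std::function<void(int)> cb) { cb(7); }

class Session : public std::enable_shared_from_this<Session> {
public:
    void start() {
        auto self = shared_from_this();                // keeps *this alive during the callback
        asyncOperation([self](int result) { self->handle(result); });
    }

private:
    void handle(int result) { std::cout << "result: " << result << '\n'; }
};

int main() {
    auto session = std::make_shared<Session>();        // must already be owned by a shared_ptr
    session->start();
}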

6. Common traps and best practices

| Trap | Solution |
|---|---|
| shared_ptr reference cycles | Break the cycle with weak_ptr |
| Wrapping a raw pointer in shared_ptr | std::shared_ptr<T> sp(rawPtr, deleter) works, but never wrap the same raw pointer twice or it will be deleted twice |
| Fragmentation from frequent allocations | Use make_shared or a custom memory pool |
| Multithreading | weak_ptr::lock() is atomic with respect to the control block, but access to the pointed-to object still needs its own synchronization |

7. Conclusion

Smart pointers are the foundation of modern C++ memory management. Understanding the semantics and performance characteristics of unique_ptr, shared_ptr, and weak_ptr, and how they interact with STL containers, asynchronous code, and multithreading, lets you write safe, readable, and efficient code. Remember: prefer RAII, avoid manual new/delete, and decide at design time whether shared ownership is really needed.

How can C++20 modules be used to speed up compilation?

The C++20 standard introduced modules to address the long rebuild and link times caused by traditional header files. Compared with headers, modules offer stronger abstraction, better maintainability, and faster builds. This article explains, from concept to practice, how to introduce modules into a project and noticeably reduce compile times.

1. Pain points of traditional headers

  • Repeated compilation: every translation unit (TU) that includes a header compiles it again, multiplying build times.
  • Order dependence: macros and include order affect the result, producing hard-to-diagnose build errors.
  • Interface leakage: headers often expose implementation details, so any implementation change triggers massive rebuilds.

2. Core ideas behind modules

  • Module interface unit: the "modular" counterpart of a header; it is compiled once and produces a compiled interface artifact (for example an .ifc file with MSVC).
  • Module implementation unit: similar to a traditional source file; only the interface unit uses the export keyword to publish the API.
  • Import syntax: import module_name; replaces #include "header.h".

2.1 Key features

| Feature | Description |
|---|---|
| export | Explicitly declares which symbols are visible to importers, which helps the compiler analyse the code |
| import | Replaces textual #include, eliminating the preprocessing of the interface |
| Compiled interface cache | The compiler stores the compiled interface (e.g. .ifc with MSVC, .gcm with GCC) and loads it directly for later imports |

3. A typical module file layout

// math.cppm: module interface unit
export module math;

export double add(double a, double b);
export double sub(double a, double b);

// math.cpp: module implementation unit (definitions are not marked export here)
module math;

double add(double a, double b) { return a + b; }
double sub(double a, double b) { return a - b; }

// main.cpp
import math; // import the module

int main() {
    double x = add(3.5, 4.2);
    double y = sub(9.0, 1.1);
    return 0;
}

3.1 Build commands

Flags and file extensions differ between compilers (GCC uses -fmodules-ts and a gcm.cache directory; MSVC uses .ixx files and /std:c++latest). An illustrative GCC-style sequence:

# Compile the module interface unit first
g++ -std=c++20 -fmodules-ts -x c++ -c math.cppm -o math_interface.o
# Compile the implementation unit and the consumer
g++ -std=c++20 -fmodules-ts -c math.cpp -o math.o
g++ -std=c++20 -fmodules-ts -c main.cpp -o main.o
# Link
g++ math_interface.o math.o main.o -o app

Note: when the interface unit is compiled, the compiler produces a compiled-interface artifact (such as math.ifc on MSVC or a .gcm file in GCC's gcm.cache). Any source file that later imports the module loads that artifact directly instead of recompiling the interface.

4. Techniques for faster builds

| Technique | Explanation |
|---|---|
| Import on demand | Import only the modules you actually need to reduce interface loading |
| Layered modules | Split loosely coupled functionality into small modules that higher-level modules can reuse |
| Prebuilt modules | Precompile common modules on CI or a build server and cache the compiled interfaces for everyone |
| Parallel builds | Modern build tools (CMake, Ninja) build in parallel, and modular code parallelizes better |

5. Coexisting with legacy code

  • Mixed builds: modules and traditional headers can be used in the same project; the compiler handles both.
  • Wrapping legacy headers: a legacy header can be wrapped in a named module (or imported as a header unit where supported) so it can be migrated gradually; see the sketch below.
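A hedged sketch of the wrapping idea; old_header.h and old_function are placeholders for whatever the legacy header provides:

// legacy_wrapper.cppm: wraps a legacy header in a named module
module;                        // global module fragment: legacy #includes go here
#include "old_header.h"

export module legacy;          // module interface unit

export using ::old_function;   // re-export selected names from the legacy header

Where the toolchain supports header units, import "old_header.h"; achieves a similar effect without writing a wrapper.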

6. Example: consuming a modularized library

Some libraries are beginning to ship experimental C++20 module builds. Note that CMake's add_library(... MODULE ...) keyword creates a runtime plugin and has nothing to do with C++20 modules; with CMake 3.28 or newer, a module interface unit is declared through a file set (boost_math.cppm below is illustrative):

add_library(boost_math)
target_sources(boost_math PUBLIC
    FILE_SET CXX_MODULES FILES boost_math.cppm)
target_compile_features(boost_math PRIVATE cxx_std_20)

Then, in user code:

import boost.math;

7. Common traps and how to diagnose them

  1. Module name clashes: make sure module names are unique and do not collide with standard library modules.
  2. Compiler support: some compilers still implement C++20 modules only partially; use a recent compiler version and check its module status documentation.
  3. Headers not converted to modules: if a file still relies on #include where an import is expected, the compiler may report errors such as "cannot import module"; check module-related flags (for example Clang's -fmodule-name or -fmodules-cache-path).

8. Summary

  • Modules compile the interface once and reuse the result, cutting the cost of repeated compilation dramatically.
  • export makes the visible symbols explicit, improving what the compiler can analyse and optimise.
  • Modules coexist well with legacy headers, allowing a gradual migration.
  • Combined with parallel builds and caching, they can reduce the build times of large projects substantially.

A good strategy is to start by migrating the most frequently imported common libraries (math, logging, networking) to modules and then extend the migration across the code base. As compile times drop, development speed and continuous-integration turnaround improve with them.