**How to Implement a Generic Lazy Evaluation Wrapper in C++17?**

Lazy evaluation, also known as delayed computation, postpones the execution of an expression until its value is actually needed. This technique can reduce unnecessary work, improve performance, and enable elegant functional‑style patterns in C++. In this article we design a reusable, type‑agnostic Lazy wrapper that works with any callable, automatically caches the result, and supports thread‑safe evaluation on demand.


1. Design Goals

| Feature | Reason |
| --- | --- |
| Generic over return type | `Lazy<T>` should work for any `T`. |
| Callable‑agnostic | Accept `std::function`, lambdas, function pointers, or member functions. |
| Automatic memoization | Store the computed value the first time it is requested. |
| Thread‑safe | Ensure only one thread computes the value; the others wait for the result. |
| Move‑only | Avoid copying large result objects unnecessarily. |
| Zero‑overhead if unused | If the value is never requested, the result is never computed or stored. |

2. Implementation

```cpp
#pragma once
#include <functional>
#include <memory>
#include <mutex>
#include <type_traits>

template <typename T>
class Lazy {
public:
    // Construct from any callable that returns T.
    template <typename Callable,
              typename = std::enable_if_t<
                  std::is_invocable_r_v<T, Callable>>>
    explicit Lazy(Callable&& func)
        : factory_(std::make_shared<std::function<T()>>(
              std::forward<Callable>(func))) {}

    // Retrieve the value, computing it on first access.
    T& get() {
        std::call_once(flag_, [this] { compute(); });
        return *value_;
    }

    // Accessor for const contexts.
    const T& get() const {
        std::call_once(flag_, [this] { compute(); });
        return *value_;
    }

    // Implicit conversion to T&, evaluating on demand.
    operator T&() { return get(); }

private:
    void compute() const {
        if (!factory_) return;                        // defensive: factory already consumed
        value_ = std::make_shared<T>((*factory_)());  // run the factory exactly once
        factory_.reset();                             // release the callable and its captures
    }

    // All members are mutable so that a const Lazy can still evaluate lazily.
    mutable std::shared_ptr<std::function<T()>> factory_;
    mutable std::shared_ptr<T> value_;
    mutable std::once_flag flag_;
};
```

Explanation

  1. Constructor – Accepts any callable that can be invoked with no arguments and whose result is convertible to T. The callable is stored in a std::shared_ptr<std::function<T()>>; the shared_ptr keeps the factory alive until the first call.
  2. get() – std::call_once guarantees that compute() runs exactly once, even under concurrent access. The computed value is stored in a `shared_ptr<T>`, enabling cheap copies when needed.
  3. Memoization – After the first call, factory_ is reset, freeing the lambda’s captured state.
  4. Thread‑safety – std::once_flag ensures that only one thread invokes the factory; the others block until the value is ready (see the sketch after this list).
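
To make the single‑execution guarantee concrete, here is a minimal sketch (assuming the `Lazy` class defined above) in which several threads race to call `get()`; an atomic counter confirms that the factory body runs exactly once. The thread count of 8 is arbitrary.

```cpp
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

std::atomic<int> invocations{0};

int main() {
    Lazy<int> lazyValue([] {
        ++invocations;          // count how many times the factory actually runs
        return 42;
    });

    std::vector<std::thread> workers;
    for (int i = 0; i < 8; ++i)
        workers.emplace_back([&] { (void)lazyValue.get(); });
    for (auto& t : workers) t.join();

    std::cout << "value = " << lazyValue.get()
              << ", factory invocations = " << invocations.load() << '\n';  // prints 1
}
```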

3. Usage Examples

3.1 Basic Lazy Integer

```cpp
#include <iostream>

Lazy<int> lazySum([]{ return 3 + 5; });

std::cout << "Sum: " << lazySum.get() << '\n';   // Computes 8 on first access
```
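
Because of the conversion operator declared in the class, a `Lazy<T>` can also be used directly where a `T` (or `T&`) is expected; the conversion triggers evaluation on demand:

```cpp
int doubled = 2 * lazySum;   // operator T&() calls get() behind the scenes
int& cached = lazySum;       // binds directly to the memoized value
```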

3.2 Lazy File Reading

```cpp
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

Lazy<std::string> fileContent([]{
    std::ifstream file("data.txt");
    std::stringstream buffer;
    buffer << file.rdbuf();      // reading happens only inside the factory
    return buffer.str();
});

// The file is read only when the content is first requested.
if (!fileContent.get().empty()) {
    std::cout << "File size: " << fileContent.get().size() << '\n';
}
```

3.3 Thread‑safe Lazy Singleton

```cpp
#include <thread>

struct HeavySingleton {
    HeavySingleton() { /* expensive construction */ }
    void doWork() { /* ... */ }
};

Lazy<HeavySingleton> singleton([]{ return HeavySingleton(); });

// Multiple threads can safely use the singleton; construction happens exactly once.
std::thread t1([]{ singleton.get().doWork(); });
std::thread t2([]{ singleton.get().doWork(); });
t1.join();
t2.join();
```

4. Performance Considerations

| Metric | Best Case | Worst Case |
| --- | --- | --- |
| First access | O(factory execution) | O(factory execution) + one heap allocation for the result |
| Subsequent access | O(1) – pointer dereference | O(1) – pointer dereference |
| Memory | Only the factory, until first call | `value_` stored; factory freed after evaluation |

Because the factory is released after its first use, the wrapper’s only cost when the value is never requested is the small `std::function` allocation made in the constructor; nothing else is computed or stored. After evaluation, the lambda’s captured variables are discarded, freeing that memory, as the sketch below demonstrates.
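
This release of captured state can be observed directly. The following minimal sketch (assuming the `Lazy` class from section 2) captures a `std::shared_ptr` in the factory; its reference count drops back to one as soon as the factory has run and been discarded:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

int main() {
    auto payload = std::make_shared<std::vector<int>>(1'000'000, 42);

    // The factory captures `payload` by value, keeping the buffer alive.
    Lazy<std::size_t> lazySize([payload] { return payload->size(); });
    assert(payload.use_count() == 2);   // our copy + the copy held by the factory

    (void)lazySize.get();               // factory runs once and is then released
    assert(payload.use_count() == 1);   // the captured copy was freed with the factory
}
```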


5. Extending the Wrapper

  1. Cache invalidation – Add a reset() method that clears the cached value and optionally installs a new callable (a sketch follows this list).
  2. Weak memoization – Store a `std::weak_ptr<T>` so the value can be reclaimed under memory pressure.
  3. Async evaluation – Replace std::call_once with std::async/std::future to compute the value lazily in a background thread.
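
As an illustration of the first item, one possible direction is shown below. Because a `std::once_flag` cannot be re‑armed, this resettable variant (a hypothetical `ResettableLazy`, not part of the class above) guards a `std::optional<T>` with a plain mutex instead:

```cpp
#include <functional>
#include <mutex>
#include <optional>
#include <utility>

template <typename T>
class ResettableLazy {
public:
    explicit ResettableLazy(std::function<T()> factory)
        : factory_(std::move(factory)) {}

    // Compute on first access (or on first access after a reset), then cache.
    T& get() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (!value_) value_.emplace(factory_());
        return *value_;
    }

    // Discard the cached value and optionally install a new factory.
    void reset(std::function<T()> newFactory = nullptr) {
        std::lock_guard<std::mutex> lock(mutex_);
        value_.reset();
        if (newFactory) factory_ = std::move(newFactory);
    }

private:
    std::mutex mutex_;
    std::function<T()> factory_;
    std::optional<T> value_;
};
```

Unlike the `call_once` version, every `get()` here takes the mutex, trading a little steady‑state speed for the ability to invalidate and recompute the value.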

6. Conclusion

The `Lazy<T>` wrapper demonstrates how modern C++17 features can create a clean, reusable, and thread‑safe lazy evaluation mechanism. It abstracts away the boilerplate of memoization and offers a declarative style of programming: simply provide a factory, and the wrapper takes care of delayed, single‑execution semantics. This pattern is particularly useful in performance‑critical applications where expensive resources (files, network data, heavy computations) should only be materialized on demand.
