Defensive programming in a DDD architecture: 5 levels that jointly ensure the validity of business data

1. Rule validation is the foundation of correctness

Rule validation is an important means of ensuring business stability: it verifies the correctness and compliance of the business logic and prevents potential errors. A missed validation rule is often followed by an online bug.

I believe every developer has faced the following situations:

  1. Input parameters are not null-checked, causing a NullPointerException (NPE) when the logic executes;

  2. User permissions are not properly checked, so ordinary users can perform privileged operations, ultimately causing security issues;

  3. Data is written to the database without integrity checks, so invalid data is stored;

  4. Exceptions that the business logic may throw are not handled properly, so the system cannot operate normally;

As these examples show, validation is critical to the process: unreasonable input can cause serious business problems. And the impact of incorrect data is far greater than most people imagine:

  1. It may abnormally interrupt the entire write process;

  2. Once incorrect data enters the database, it does fatal damage to every subsequent read operation;

  3. When data in an upstream system is wrong, downstream systems “collapse” one after another;

2. Defensive programming

How can we prevent these situations? The answer lies in defensive programming.

Defensive programming is a software development approach that anticipates possible exceptions and error conditions in code and handles them with appropriate measures, thereby improving the robustness and stability of the software. With defensive programming, developers can avoid or reduce the unpredictable behavior and adverse effects caused by program errors in complex features, and ensure the software's correctness, stability, reliability, and security during deployment and operation.

The core idea of defensive programming is to consider, as far as possible, every exception and error condition the code may encounter, and to handle each of them explicitly. For example, you can use the exception-catching mechanism to handle possible errors, use code comments and constraints to standardize input data, and use assertions to check preconditions and postconditions.

If the definition sounds complicated, a simpler way to understand it: defensive programming means

  1. Do not trust any input. The validity of parameters must be ensured before they are formally used;

  2. Do not trust any processing logic. After processing completes, you must make sure the business rules still hold;

Being suspicious of the input parameters, of the preconditions for business execution, and of the results of business execution will greatly improve the correctness of the system!
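As a minimal sketch of these two rules (the class and method names here are illustrative, not from any real system), a transfer operation might defend itself like this:

```java
import java.util.Objects;

public class TransferService {
    // Rule 1: do not trust any input -- validate before use.
    // Rule 2: do not trust the processing logic -- check the result afterwards.
    public static long transfer(Long balance, Long amount) {
        Objects.requireNonNull(balance, "balance must not be null");
        Objects.requireNonNull(amount, "amount must not be null");
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        if (amount > balance) {
            throw new IllegalStateException("insufficient balance");
        }

        long remaining = balance - amount;

        // Postcondition: the business rule must still hold after processing.
        if (remaining < 0) {
            throw new IllegalStateException("balance must never go negative");
        }
        return remaining;
    }
}
```

Both the precondition checks and the postcondition check interrupt the flow the moment a rule is violated, instead of letting bad data travel further.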

3. Interrupt with exceptions or with return values?

In rule validation scenarios, prefer exceptions for interrupting the process.

3.1. Exception-based interruption is the standard

In programming languages without exceptions, we can only use special return values to represent errors. Such a design mixes the normal flow with error handling and hurts readability. In C, for example, -1 or NULL is typically used to indicate failure, so the first thing a caller does is check whether the result is NULL or -1, as in the following code:

#include <stdio.h>

void readFileAndPrintContent(const char* filename) {
    FILE* file = fopen(filename, "r");
    if (file == NULL) {
        // The file cannot be opened; report the error and return
        fprintf(stderr, "Failed to open the file.\n");
        return; // Return directly to indicate a failure occurred
    }

    char line[256];
    while (fgets(line, sizeof(line), file) != NULL) {
        printf("%s", line);
    }

    fclose(file);
}

The Java language introduces a complete exception mechanism to handle errors better. This mechanism has the following characteristics:

  1. Separation of normal and exceptional flow. The exception mechanism separates error-handling code from normal business logic, making the code clearer and easier to read; exception-handling code can also be centralized for easier understanding and maintenance;

  2. Exception propagation and catching. When a method throws an exception, you can catch and handle it in the current method or let it propagate to the caller until a suitable handler is found. This flexible propagation mechanism allows errors to be handled in the right place without abruptly terminating the program;

  3. Exception information delivery. A Java exception object carries rich information describing the cause and context of the error, including the exception type, message, and location. This helps developers quickly locate and fix problems and improves debugging and maintenance efficiency;

Exception handling in Java becomes simple and rigorous:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class FileReadExample {
    public static void readFileAndPrintContent(String filename) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(filename))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }

    public static void main(String[] args) {
        try {
            readFileAndPrintContent("example.txt");
        } catch (IOException e) {
            System.err.println("Exception occurred: " + e.getMessage());
            System.exit(-1); // Return error code -1 to indicate an exception occurred
        }
    }
}

In daily business development, when something does not meet business expectations, the process can be interrupted directly through exceptions.

3.2. Interrupt immediately or in stages?

When an unexpected situation occurs, should an exception be thrown immediately, or only after the whole stage completes? It depends on the business scenario!

In parameter validation, collect all invalid inputs first and then throw a single exception, so that users see every problem at once and can fix them in one pass.

In business scenarios, interrupt immediately when a rule is violated, to prevent damage to subsequent steps.
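Staged interruption for parameter validation can be sketched as follows (class and field names are hypothetical): collect every failure into a list, then throw a single exception carrying all of them.

```java
import java.util.ArrayList;
import java.util.List;

// Staged interruption: gather every parameter problem first,
// then throw one exception that lists them all.
public class ParamChecker {
    public static List<String> collectErrors(String name, Integer age) {
        List<String> errors = new ArrayList<>();
        if (name == null || name.isEmpty()) {
            errors.add("name must not be empty");
        }
        if (age == null || age < 0) {
            errors.add("age must be a non-negative number");
        }
        return errors;
    }

    public static void check(String name, Integer age) {
        List<String> errors = collectErrors(name, age);
        if (!errors.isEmpty()) {
            // One exception carrying all problems at once.
            throw new IllegalArgumentException(String.join("; ", errors));
        }
    }
}
```

Business-rule checks, by contrast, would throw inside each `if` immediately rather than accumulating.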

4. Rule validation in the standard write process

When developing using DDD, a standard writing process includes:

[Figure: the standard write process]

Five major categories of rule validation are involved:

  1. Parameter validation. Basic checks on input parameters, such as null checks and type checks;

  2. Business validation. Precondition checks on business rules or resources, such as whether stock is sufficient or whether a product is on sale;

  3. Status validation. State checks within the aggregate, centered on the business state machine. For example, payment can succeed only when the order status is pending payment;

  4. Fixed rule validation. If the aggregate has fixed rules (invariants), they must be checked before persistence. For example, order payment amount = sum of all product sale prices − sum of all discounts;

  5. Storage engine validation. A final safeguard based on storage engine features, such as a unique index to prevent duplicate submissions (idempotency protection);
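A rough, purely illustrative sketch of how the five levels line up in one write flow (all names and thresholds here are invented; a real implementation delegates each step as described in the sections below, and level 5 lives in the database):

```java
// Hypothetical write flow showing where each validation level sits.
public class CreateOrderFlow {
    public static String process(String userId, int count, String orderStatus) {
        // 1. Parameter validation: basic format checks on the input.
        if (userId == null || count < 1) {
            throw new IllegalArgumentException("invalid parameters");
        }
        // 2. Business validation: preconditions such as stock availability
        //    (represented here by a trivial stand-in check).
        if (count > 100) {
            throw new IllegalStateException("insufficient stock");
        }
        // 3. Status validation: the state machine must allow the operation.
        if (!"CREATED".equals(orderStatus)) {
            throw new IllegalStateException("order status does not allow this operation");
        }
        // 4. Fixed rule validation: invariants are checked before persistence.
        // 5. Storage engine validation: unique keys etc., enforced by the database.
        return "PAID";
    }
}
```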

4.1. Parameter verification

This is the most basic validation. It involves few business concepts, only simple parameters, and its purpose is to verify the data format.

For such a general capability, prefer existing frameworks. Commonly used options include:

  1. The Bean Validation framework, mainly for validating individual attributes;

  2. Verifiable + AOP, mainly for validation that spans multiple attributes;

4.1.1. Validation framework

For single-attribute validation you can use Hibernate Validator, a validation framework based on the Java Bean Validation specification. It provides a series of features for validating and constraining the data model, mainly:

  1. A set of validation annotations for adding constraints to fields, method parameters, return values, and so on. For example, @NotNull verifies that a value is not null, @Email verifies email format, and @Size verifies string length;

  2. Many built-in annotations covering common requirements. For example, @NotBlank verifies that a string contains at least one non-blank character, @Pattern verifies that a string matches a regular expression, and @Min and @Max verify numeric bounds;

  3. Beyond the built-in annotations, developers can define custom constraint annotations to implement more flexible, business-specific rules;

  4. Constraints can be grouped, and a specific validation group can be validated on demand, enabling fine-grained control over which checks run in which scenario;

  5. Core classes such as the validator and the validation context are provided: the validator executes the checks, and the context offers rich methods for obtaining results and error messages;

  6. The execution order of constraints can be controlled by specifying a validation sequence, ensuring constraints run in the required order;

  7. Internationalization is supported: validation error messages can be localized per locale via configurable resource files;

There are many features, but the most common usage is adding validation annotations to model fields, method parameters, and return values. For example, adding annotations to CreateOrderCmd avoids handwritten checks:

@Data
public class CreateOrderCmd {
    @NotNull
    private Long userId;

    @NotNull
    private Long productId;

    @NotNull
    @Min(1)
    private Integer count; // a wrapper type, so that @NotNull is meaningful
}
4.1.2. Verifiable + AOP

Some parameter validation is complex and involves multiple attributes. In such cases the Bean Validation framework falls short.

Of course, you could establish a convention: each parameter class provides a validate method, called right after entering the method and before the parameters are used. But when a convention relies on people following it, omissions are inevitable. The best solution is to internalize it into the framework, as shown below:

[Figure: Verifiable + AOP validation flow]

  1. First, define an interface Verifiable with a single validate method;

  2. Second, define a ValidateInterceptor that inspects input parameters in a pre-interceptor; if a parameter implements Verifiable, its validate method is called automatically;

  3. Finally, generate a Proxy with AOP to perform unified parameter validation;

When multiple attributes need to be validated together, you only need to implement the validate method of the Verifiable interface; there is no need to call validate manually.
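Outside of Spring, the same interception idea can be sketched with a JDK dynamic proxy. The Verifiable interface follows the text; the OrderService interface and the wiring are assumptions made for illustration:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ValidateProxyDemo {
    // The contract from the text: parameter objects validate themselves.
    public interface Verifiable {
        void validate();
    }

    // A hypothetical business interface to be proxied.
    public interface OrderService {
        String create(Object cmd);
    }

    // Before invoking the real method, call validate() on every argument
    // that implements Verifiable -- the role the AOP interceptor plays.
    public static OrderService wrap(OrderService target) {
        InvocationHandler handler = (proxy, method, args) -> {
            if (args != null) {
                for (Object arg : args) {
                    if (arg instanceof Verifiable) {
                        ((Verifiable) arg).validate();
                    }
                }
            }
            return method.invoke(target, args);
        };
        return (OrderService) Proxy.newProxyInstance(
                OrderService.class.getClassLoader(),
                new Class<?>[]{OrderService.class},
                handler);
    }
}
```

In the real setup this wrapping is done once by the AOP framework, so callers never see it.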

4.2. Business verification

Business validation checks the preconditions for executing business logic, including external checks and control-condition checks.

Business validation is usually complex and changes frequently, so the extensibility requirements are high. The rules themselves, however, are relatively independent of one another. To cope with logical expansion, the strategy pattern is a good fit, as shown below:

[Figure: business validation with the strategy pattern]

4.2.1. Business validator

The business validator is the strategy interface of the strategy pattern.

The core code is as follows:

public interface BaseValidator<A> extends SmartComponent<A> {
    void validate(A a, ValidateErrorHandler validateErrorHandler);

    default void validate(A a){
        validate(a, ((name, code, msg) -> {
            throw new ValidateException(name, code, msg);
        }));
    }
}

The interface is very simple:

  1. It provides a unified validate method definition;

  2. It extends SmartComponent, whose boolean support(A a) method decides whether this component can handle a given input;

4.2.2. Shared Data Context

With a unified strategy interface in place, the Context pattern is used to manage input parameters. A Context can be a simple data container or an enhanced container with lazy-load capability; its core purpose is to share data across multiple strategies.

For example, CreateOrderContext in the order process is defined as follows:

@Data
public class CreateOrderContext {

    private CreateOrderCmd cmd;

    @LazyLoadBy("#{@userRepository.getById(cmd.userId)}")
    private User user;

    @LazyLoadBy("#{@productRepository.getById(cmd.productId)}")
    private Product product;

    @LazyLoadBy("#{@addressRepository.getDefaultAddressByUserId(user.id)}")
    private Address defAddress;

    @LazyLoadBy("#{@stockRepository.getByProductId(product.id)}")
    private Stock stock;

    @LazyLoadBy("#{@priceService.getByUserAndProduct(user.id, product.id)}")
    private Price price;
}

Here, @LazyLoadBy is a function-enhancing annotation: the first call to the attribute's getter automatically triggers data loading and stores the result in the attribute; subsequent calls read the cached value directly from the attribute.

[Note] If you are interested in this part, see “Command & Query Object and the Context Pattern”.
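@LazyLoadBy belongs to the author's framework, but its load-on-first-access behavior can be approximated in plain Java with a memoizing holder (a sketch of the idea, not the actual implementation):

```java
import java.util.function.Supplier;

// Approximates @LazyLoadBy: the first getter call triggers loading,
// later calls return the cached value.
public class Lazy<T> {
    private final Supplier<T> loader;
    private T value;
    private boolean loaded;

    public Lazy(Supplier<T> loader) {
        this.loader = loader;
    }

    public T get() {
        if (!loaded) {          // first access: load and cache
            value = loader.get();
            loaded = true;
        }
        return value;           // later accesses: return the cached value
    }
}
```

The annotation-based version additionally resolves the loader expression against Spring beans, but the caching contract is the same.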

4.2.3. Policy management

With the strategy interface and the shared Context in place, the next step is to implement the various high-cohesion, low-coupling validator classes required by the business, as shown below:

[Figure: concrete business validator implementations]

How these components are managed is shown in the figure below:

[Figure: validator management in ValidateService]

  1. At startup, Spring injects all BusinessValidator instances into the validators collection of ValidateService;

  2. When validateBusiness is called, the validators collection is traversed in order; each validator whose support method accepts the Context executes its validate method;

The biggest advantage of this design is that the validation components fully follow the open-closed principle:

  1. To add new business logic, you only need to add a new Spring component; the system integrates it automatically;

  2. To modify an existing business check, only one class changes, with no impact on the other checks;

On closer inspection you may notice that this is actually a variation of the chain of responsibility pattern. Because the implementation is so simple, it appears many times inside the Spring framework itself.
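A minimal, framework-free sketch of this management scheme (the BaseValidator and ValidateService names follow the text; constructor injection stands in for what Spring does at startup):

```java
import java.util.List;

public class ValidateServiceDemo {
    // Simplified strategy interface: support() decides applicability.
    public interface BaseValidator<A> {
        boolean support(A a);
        void validate(A a);
    }

    public static class ValidateService<A> {
        private final List<BaseValidator<A>> validators;

        // In the real setup Spring injects this collection at startup.
        public ValidateService(List<BaseValidator<A>> validators) {
            this.validators = validators;
        }

        // Walk the validators in order; each applicable one runs its check.
        public void validateBusiness(A context) {
            for (BaseValidator<A> v : validators) {
                if (v.support(context)) {
                    v.validate(context);
                }
            }
        }
    }
}
```

Adding a new rule means adding one more BaseValidator to the list; nothing else changes, which is exactly the open-closed behavior described above.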

4.3. Status verification

Status validation, also called pre-state validation, is the most important part of the business rules.

A core entity usually has a status attribute, and the values of that attribute together form a standard state machine, as shown below:

[Figure: the order entity's state machine]

This is the state machine of an order entity; it defines the allowed transitions between states and is the most important part of the domain design. When a business action occurs, the first step is not to modify the status but to determine whether the operation is allowed in the current status.

For example, the core business of successful payment:

public void paySuccess(PayByIdSuccessCommand paySuccessCommand){
    if (getStatus() != OrderStatus.CREATED){
        throw new OrderStatusNotMatch();
    }

    this.setStatus(OrderStatus.PAID);

    PayRecord payRecord = PayRecord.create(paySuccessCommand.getChannel(), paySuccessCommand.getPrice());
    this.payRecords.add(payRecord);

    OrderPaySuccessEvent event = new OrderPaySuccessEvent(this);
    this.events.add(event);
}

The status is checked before any logic runs: only an order in the “created” status can accept the payment-success operation and transition to “paid”.

4.4. Fixed rule verification

Fixed rule validation has fewer usage scenarios, but it is powerful and can eliminate logical errors at the root.

An order entity involves a large number of amount operations, such as:

  1. Coupons. When a user applies a coupon, the payment amount is reduced by the discount, and the discount is also apportioned across the line items;

  2. Promotions. Essentially the same effect on the order as coupons, but the scenarios are more complex;

  3. Stacked offers. Coupons and promotions can be combined, both modifying the order;

  4. Manual price changes. After the merchant and the user reach an agreement, the merchant can modify the order amount in the back office;

After the order amount changes there are many fields to update, but whatever the change is, one formula must hold: payment amount = sum of sale amounts − sum of discount amounts.

Based on this formula, we can verify the rule after the business operation and before the database update. If the rule does not hold, the processing logic has a bug, and an exception is thrown immediately to interrupt the process.
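A possible shape for this check (method and parameter names are illustrative): compute both sums and compare them against the payment amount right before persisting.

```java
import java.math.BigDecimal;
import java.util.List;

// The fixed rule from the text: payment amount must equal the sum of
// sale amounts minus the sum of discounts. Checked before persistence.
public class OrderRuleCheck {
    public static void checkAmountRule(BigDecimal payAmount,
                                       List<BigDecimal> saleAmounts,
                                       List<BigDecimal> discounts) {
        BigDecimal sales = saleAmounts.stream()
                .reduce(BigDecimal.ZERO, BigDecimal::add);
        BigDecimal discount = discounts.stream()
                .reduce(BigDecimal.ZERO, BigDecimal::add);
        if (payAmount.compareTo(sales.subtract(discount)) != 0) {
            // The processing logic is broken -- interrupt before saving.
            throw new IllegalStateException("pay amount rule violated");
        }
    }
}
```

Note the use of BigDecimal and compareTo: amounts should never be compared as floating-point values or via equals, which is scale-sensitive.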

4.4.1. JPA support

JPA supports callbacks to business methods before data is saved or updated.

We can use callback annotations or entity listeners to complete business callbacks.

@PreUpdate
@PrePersist
public void checkBizRule(){
    //Perform business verification
}

With @PreUpdate and @PrePersist on the checkBizRule method, the framework automatically calls checkBizRule before saving or updating the entity. If the method throws an exception, the processing flow is forcibly interrupted.

You can also use entity listeners for processing, as shown in the following example:

//First, define an OrderListenrer
public class OrderListener {
    @PrePersist
    public void preCreate(Order order) {
        order.checkBiz();
    }

    @PostPersist
    public void postCreate(Order order) {
        order.checkBiz();
    }
}

//Add relevant configuration on the Order entity
@Data
@Entity
@Table
@Setter(AccessLevel.PRIVATE)
//Configure OrderListener
@EntityListeners(OrderListener.class)
public class Order implements AggRoot<Long> {
    // Omit some non-critical code
    public void checkBizRule(){
        //Perform business verification
    }
}
4.4.2. MyBatis support

MyBatis's lifecycle support for entities is not as powerful as JPA's, but the same effect can be achieved with an Interceptor. The steps are as follows:

First, write a custom Interceptor that inspects the parameters and calls the rule validation method:

@Intercepts({
        @Signature(type = Executor.class, method = "update", args = {MappedStatement.class, Object.class})
})
public class EntityInterceptor implements Interceptor {
    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        Object[] args = invocation.getArgs();
        MappedStatement statement = (MappedStatement) args[0];
        Object parameter = args[1];

        //Here you can judge the parameters and perform corresponding operations
        if (parameter instanceof Order) {
            Order order = (Order) parameter;
            order.checkBizRule();
        }

        return invocation.proceed();
    }

    @Override
    public Object plugin(Object target) {
        return Plugin.wrap(target, this);
    }

    @Override
    public void setProperties(Properties properties) {
        // You can set some configuration parameters here
    }
}

Then register the Interceptor in the mybatis-config.xml configuration file:

<configuration>
    <!-- Other configurations -->

    <plugins>
        <plugin interceptor="com.example.EntityInterceptor"/>
    </plugins>
</configuration>
4.4.3. Business framework extension

The Lego framework encapsulates the standard Command processing flow and supports fixed rule validation within it, as shown below:

[Figure: fixed rule validation in the Lego framework]

The validateRule method in ValidateService is called automatically during the fixed-rule validation phase of the standard write process. The overall structure is essentially the same as business validation, so the details are omitted here. In particular:

  1. A default AggBasedRuleValidator implementation is provided; by overriding the validate method on the aggregate root you can achieve the same effect as with JPA or MyBatis;

  2. You can also write your own RuleValidator and register the implementation class in the Spring container to integrate it into the business process;

4.5. Storage engine verification

The storage engine provides very rich data validation, such as NOT NULL, length, and unique constraints.

Normally, all validation rules should have passed before the process ever reaches the storage engine; try not to use the storage engine as a fallback. But there is one thing only the storage engine can accomplish elegantly: unique key protection.

For example, when idempotency protection is required, we usually make the idempotency key a unique index, ensuring that duplicate submissions cannot both succeed.
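The database enforces this with a unique index; its effect can be illustrated in plain Java with a set standing in for the index (purely an illustration of the semantics, not how a database implements it):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Simulates a unique index on an idempotency key: the first insert wins,
// and any duplicate submission is rejected atomically.
public class IdempotencyGuard {
    private final Set<String> uniqueIndex = ConcurrentHashMap.newKeySet();

    public boolean tryInsert(String idempotencyKey) {
        // Set.add here is atomic, just as a unique index makes the
        // duplicate check and the insert a single atomic step.
        return uniqueIndex.add(idempotencyKey);
    }
}
```

The key point is atomicity: a check-then-insert written in application code has a race window, whereas the unique index (like Set.add here) decides winner and loser in one step.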

5. Summary

To ensure that dirty data (data that does not meet business expectations) never enters the system, we push the idea of “defensive programming” to the extreme and set up five levels of checks in a standard write process, guarding the data from multiple dimensions and perspectives:

  1. Parameter validation. Do not trust any input; strictly validate everything entering the system;

  2. Business validation. Business operations often depend on preconditions, which can add up to more complexity than the core operation itself; providing an isolated, extensible design is the core of this stage;

  3. Status validation. Protect the state machine: every operation has a required pre-state and must not execute from an illegal state;

  4. Fixed rule validation. When the upper-layer business is complex and changes frequently, fixed rules can be violated; so after the business operation and before the data operation, verify the fixed rules once more;

  5. Storage engine validation. Validation should be completed in Java code first, and storage engine checks should not serve as routine validation; for uniqueness guarantees, however, the storage engine is the simplest and most effective strategy;

Only when all five levels work together can the validity of business data truly be ensured.
