1. Learning Objectives

1.1. Master Tomcat's architecture and design principles to build up your "internal strength"

From a macro view, Tomcat is a complete system; from a microscopic view, it is split into components by function, with the commonalities extracted during the split. The popular microservices of today follow the same idea: a monolithic application is split into "microservices" according to function, the common parts are extracted, and those common parts become core basic services or general libraries. The "middle platform" idea is the same. Design patterns are often a powerful tool for encapsulating change; used reasonably, they make our code and system design elegant and neat. This is the "internal strength" to be gained from studying excellent open-source software. It never becomes outdated, because the design ideas and philosophy behind it are the fundamentals: learn from their design experience, use design patterns to encapsulate what changes and what stays the same, and draw on their source code to improve your own system design ability.

1.2. A macro understanding of how a request reaches Spring

At work we are already very familiar with Java syntax, we may even have "memorized" some design patterns, and we have used many Web frameworks, but we rarely get the chance to design a system independently; it often feels like we just implement one Service after another according to requirements. We lack a panoramic view of Java Web development, for example how a browser request ends up in Spring code. To break through this bottleneck, why not stand on the shoulders of giants, study excellent open-source systems, and see how their authors think about these problems? After studying the principles of Tomcat, that panorama becomes much clearer.

1.3. Improve your system design ability

While learning Tomcat, I also found that it uses a lot of advanced Java technology, such as multi-threaded concurrent programming, Socket network programming and reflection. Before, I only knew of these technologies and memorized some interview questions, but there is a gap between "knowing" and being able to use them. By studying the Tomcat source code, I learned in which scenarios to use them. The same goes for system design capability: interface-oriented programming, component composition, skeleton abstract classes, one-click start and stop, object pool technology, and various design patterns such as template method, observer and chain of responsibility. Later I began to imitate them and apply these design ideas in actual work.

2. Overall Architecture Design

Today we analyze the design ideas of Tomcat step by step. On the one hand, we learn the overall architecture of Tomcat: how to design a complex system from a macro perspective, how to design the top-level modules and the relationships between them. On the other hand, it lays the foundation for studying Tomcat's working principles in depth. Tomcat startup process:
Tomcat implements two core functions: handling Socket connections and converting the network byte stream into Request and Response objects; and loading and managing Servlets and actually processing the Request.
Therefore, Tomcat is designed with two core components, the connector (Connector) and the container (Container): the connector is responsible for external communication, and the container is responsible for internal processing. In order to support multiple I/O models and application-layer protocols, one container may need to be paired with multiple connectors.
Each component has its own life cycle and needs to be started, and so do its internal sub-components. For example, a Tomcat instance contains a Service, a Service contains multiple connectors and one container, a container contains multiple Hosts, a Host may contain multiple Context containers, and a Context may contain multiple Servlets. Tomcat therefore uses the composite pattern to manage these components, treating single components and composite components uniformly; overall, the components nest like "Russian dolls".

2.1 Connectors

Before talking about the connectors themselves, let me first lay some groundwork on the I/O models and application-layer protocols involved. The I/O models supported by Tomcat include NIO, NIO.2 (asynchronous I/O) and APR.
The application-layer protocols supported by Tomcat are HTTP/1.1, AJP and HTTP/2.
So one container may be docked with multiple connectors. The connector shields the differences of I/O models and application-layer protocols from the container: what the container gets is a standard ServletRequest object. Refined, the functional requirements of the connector are: listen on a network port; accept network connection requests; read the request's network byte stream; parse the byte stream according to the application-layer protocol into a Tomcat Request object; convert the Tomcat Request into a standard ServletRequest and hand it to the container; and, on the way back, turn the ServletResponse into a Tomcat Response, then into a byte stream written back to the client.
After the requirements are clearly listed, the next question we need to consider is, what sub-modules should the connector have? Excellent modular design should consider high cohesion and low coupling.
We find that the connector needs to complete three highly cohesive functions: network communication; parsing of the application-layer protocol; and conversion between Tomcat Request/Response and ServletRequest/ServletResponse.
Therefore, the designers of Tomcat designed three components to implement these three functions: EndPoint, Processor and Adapter. The I/O model of network communication changes, and the application-layer protocol changes too, but the overall processing logic remains unchanged.

2.2 Encapsulating change and invariance

Therefore, Tomcat designed a series of abstract base classes to encapsulate the stable parts. The abstract base class AbstractProtocol implements the ProtocolHandler interface, and each application-layer protocol has its own abstract base class, such as AbstractAjpProtocol and AbstractHttp11Protocol. This is an application of the Template Method design pattern. In summary, the three core components of the connector (EndPoint, Processor and Adapter) do three things: EndPoint handles network communication, Processor parses the application-layer protocol, and Adapter adapts the result for the Servlet container.

ProtocolHandler component: it mainly handles network connections and the application-layer protocol. It contains two important components, EndPoint and Processor, which together form the ProtocolHandler. Let me introduce their working principles in detail.

EndPoint: the EndPoint is the communication endpoint, the abstraction of the transport layer; it sends and receives bytes at the TCP/IP level. Its two important sub-components are the Acceptor and the SocketProcessor: the Acceptor listens for Socket connection requests. We know that using a Java multiplexer (Selector) boils down to two steps, sketched in the example below:
1. Create a Selector and register the channel events you are interested in on it.
2. Call select() in a loop and handle the channels whose events are ready.
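A minimal, self-contained sketch of those two steps (plain JDK NIO, not Tomcat's actual NioEndpoint code) might look like this:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.*;
import java.util.Iterator;

public class SelectorSketch {
    public static void main(String[] args) throws IOException {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);

        // Step 1: create a Selector and register the events of interest
        Selector selector = Selector.open();
        server.register(selector, SelectionKey.OP_ACCEPT);

        // Step 2: call select() and handle the channels whose events are ready
        while (true) {
            selector.select();
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel socket = server.accept();
                    // In Tomcat, the accepted socket would be wrapped in a
                    // SocketProcessor task and handed to the worker thread pool.
                    socket.configureBlocking(false);
                    socket.register(selector, SelectionKey.OP_READ);
                }
            }
        }
    }
}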
In Tomcat, LimitLatch is the connection controller that limits the maximum number of connections; the default in NIO mode is 10,000, and when this threshold is reached new connection requests are rejected. The SocketProcessor implements the Runnable interface; in its run method it calls the protocol handling component Processor, and these tasks are executed by Tomcat's worker thread pool.

Processor: the Processor implements the HTTP protocol. It receives the Socket from the EndPoint, reads the byte stream, parses it into Tomcat Request and Response objects, and submits them to the container through the Adapter. The Processor is the abstraction of the application-layer protocol. From the figure we can see that after the EndPoint accepts a Socket connection, it generates a SocketProcessor task and submits it to the thread pool; the run method of SocketProcessor calls the HttpProcessor component to parse the application-layer protocol. After the Processor produces the Request object, it calls the Adapter's service method, which passes the request to the container through the following code:

// Calling the container
connector.getService().getContainer().getPipeline().getFirst().invoke(request, response);

Adapter component: because the protocols differ, Tomcat defines its own Request class to carry the request information, but this Request is not the standard ServletRequest, so it cannot be handed to the container directly. The solution of the Tomcat designers is to introduce the CoyoteAdapter, a classic use of the adapter pattern: the connector calls CoyoteAdapter's service method and passes in the Tomcat Request; CoyoteAdapter converts it into a ServletRequest and then calls the container's service method.

2.3 Container

The connector is responsible for external communication, and the container is responsible for internal processing. Specifically, the connector handles the Socket communication and the parsing of the application-layer protocol to obtain a ServletRequest, and the container is responsible for processing that ServletRequest. Container: as the name implies, it is used to hold things, so the Tomcat container is used to load and manage Servlets and to process the requests they serve. Tomcat designed four containers: Engine, Host, Context and Wrapper. It should be noted that these four containers are not in a parallel relationship but in a parent-child relationship, as shown in the following figure. You may ask: why design so many levels of containers? Doesn't this increase complexity? The consideration behind it is that the layered architecture makes the Servlet container very flexible: one Host contains multiple Contexts, one Context contains multiple Servlets, and each component requires unified life-cycle management, so the composite pattern is used to design these containers. You can use the Tomcat configuration file to gain a deeper understanding of this hierarchy:

<!-- Top-level component: represents a Tomcat instance and can contain multiple Services -->
<Server port="8005" shutdown="SHUTDOWN">
  <!-- Top-level component: contains one Engine and multiple connectors -->
  <Service name="Catalina">
    <!-- Connectors -->
    <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
    <!-- Container component: one Engine handles all requests of the Service and contains multiple Hosts -->
    <Engine name="Catalina" defaultHost="localhost">
      <!-- Container component: processes client requests for the specified Host; can contain multiple Contexts -->
      <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
        <!-- Container component: handles all client requests for a specific Web application (Context) -->
        <Context></Context>
      </Host>
    </Engine>
  </Service>
</Server>

How are these containers managed? There is a parent-child relationship between containers, forming a tree structure. Does this remind you of the composite pattern? Tomcat indeed uses the composite pattern to manage these containers.
The specific implementation is that all container components implement the Container interface:

public interface Container extends Lifecycle {
    public void setName(String name);
    public Container getParent();
    public void setParent(Container container);
    public void addChild(Container child);
    public void removeChild(Container child);
    public Container findChild(String name);
}

Here we see methods such as getParent, setParent, addChild and removeChild, which are typical of the composite pattern: single components and composite components are treated uniformly.

2.4. How a request is routed to a Servlet

How is a request routed to a specific Wrapper's Servlet for processing? The answer is the Mapper component. The function of the Mapper is to locate the container that should process a request URL; it holds the mapping configuration of the containers: the domain names of the Host containers, the Web application paths of the Context containers, and the Servlet mapping paths of the Wrapper containers. When a request comes in, the Mapper matches the URL against this configuration level by level. Suppose a user visits a URL on port 8080 whose path begins with /order; the routing goes roughly like this:

1. First, determine the Service and Engine based on the protocol and port number. Tomcat's default HTTP connector listens on port 8080 and the default AJP connector listens on port 8009. The URL in this example accesses port 8080, so the request is received by the HTTP connector; a connector belongs to a Service component, so the Service is determined. Besides multiple connectors, a Service also has exactly one container component, the Engine, so once the Service is determined, the Engine is determined too.

2. Select the Host based on the domain name. After the Service and Engine are determined, the Mapper looks up the corresponding Host container using the domain name in the URL and the Host names configured in server.xml.

3. Find the Context based on the URL path. After the Host is determined, the Mapper matches the path of the corresponding Web application according to the URL path; in this example the path accessed is /order, so the corresponding Context container is found.

4. Find the Wrapper (Servlet) based on the URL path. After the Context is determined, the Mapper finds the specific Wrapper and Servlet according to the Servlet mapping paths configured in web.xml.

The Adapter in the connector then calls the container's service method to execute the Servlet. The first container to receive the request is the Engine; after processing, the Engine passes the request to its child Host, and so on; finally the request reaches the Wrapper, which calls the final Servlet. How is this calling process implemented? The answer is the Pipeline-Valve pipeline.

public interface Valve {
    public Valve getNext();
    public void setNext(Valve valve);
    public void invoke(Request request, Response response);
}

Continue with the Pipeline interface:

public interface Pipeline {
    public void addValve(Valve valve);
    public Valve getBasic();
    public void setBasic(Valve valve);
    public Valve getFirst();
}

A Pipeline maintains the list of Valves added through addValve, and getFirst returns the head of that list. In fact, each container has a Pipeline object; as long as the first Valve of the Pipeline is triggered, all the Valves in that container's Pipeline are invoked. But how are the Pipelines of different containers chained together? This is the job of the basic Valve (getBasic/setBasic): the basic Valve sits at the end of the Valve list and is responsible for calling the first Valve of the child container's Pipeline. The whole process is triggered by the Adapter in the connector, which invokes the first Valve of the Engine's Pipeline:

@Override
public void service(org.apache.coyote.Request req, org.apache.coyote.Response res) {
    // Omit other code
    // Calling the container
    connector.getService().getContainer().getPipeline().getFirst().invoke(request, response);
    ...
}

The last Valve of the Wrapper container creates a Filter chain and finally calls the Servlet's service method. Didn't we mention Filters before? The difference is that a Valve is Tomcat's private mechanism and works at the container level, while a Filter is part of the Servlet specification and works inside a single Web application.
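To make the chaining concrete, here is a small self-contained sketch of the Pipeline-Valve idea. The types are simplified stand-ins rather than Tomcat's real classes, and valves are kept in a list instead of the getNext/setNext linked list Tomcat uses:

import java.util.ArrayList;
import java.util.List;

class Request { String uri; }
class Response { StringBuilder body = new StringBuilder(); }

interface Valve {
    void invoke(Request request, Response response);
}

class Pipeline {
    private final List<Valve> valves = new ArrayList<>();
    // The basic valve runs last; in Tomcat it calls the child container's pipeline
    private Valve basic;

    void addValve(Valve valve) { valves.add(valve); }
    void setBasic(Valve valve) { this.basic = valve; }

    // Trigger the first valve: run every added valve, then the basic valve
    void invokeFirst(Request request, Response response) {
        for (Valve v : valves) {
            v.invoke(request, response);
        }
        if (basic != null) {
            basic.invoke(request, response);
        }
    }
}

public class PipelineSketch {
    public static void main(String[] args) {
        Pipeline enginePipeline = new Pipeline();
        Pipeline hostPipeline = new Pipeline();

        enginePipeline.addValve((req, res) -> res.body.append("[engine access log]"));
        // The engine's basic valve hands the request to the child (Host) pipeline
        enginePipeline.setBasic((req, res) -> hostPipeline.invokeFirst(req, res));
        hostPipeline.setBasic((req, res) -> res.body.append("[host -> context -> wrapper -> servlet]"));

        Request req = new Request();
        Response res = new Response();
        enginePipeline.invokeFirst(req, res);
        System.out.println(res.body); // [engine access log][host -> context -> wrapper -> servlet]
    }
}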
Lifecycle

Earlier we saw that the Container interface extends the Lifecycle interface: every component has a life cycle. How do we uniformly manage the creation, initialization, start, stop and destruction of components? How do we keep the code logic clear? How do we add or remove components easily? How do we ensure that components are started and stopped without omission or duplication?

One-click start and stop: the Lifecycle interface. Design is about finding the changing and unchanging points of the system. The unchanging point here is that every component goes through creation, initialization and start, and these states and state transitions do not change. What changes is how each concrete component implements its initialization and start. Therefore, Tomcat abstracts the unchanging part into an interface related to the life cycle, called Lifecycle, which defines the methods init(), start(), stop() and destroy(). In a parent component's init() the child components are created and their init() methods are called, and the same goes for start(), so calling init() and start() on the top-level Server component starts the whole of Tomcat: one-click start and stop.

Extensibility: Lifecycle events. Let's consider another issue, the extensibility of the system. The concrete implementations of each component's init and start keep changing as requirements evolve, and we don't want those changes to ripple into the overall start-up logic. Tomcat's answer is the observer pattern: a component's life-cycle state transition is an event, and listeners (observers) can be registered on the component; when the state changes, the registered listeners are notified. Adding new logic then only requires adding a listener, without touching existing code, which satisfies the open-closed principle. Accordingly, the Lifecycle interface also defines addLifecycleListener and removeLifecycleListener for registering and removing listeners.

Reusability: the LifecycleBase abstract base class. Here we see the abstract template pattern again. With the interface in place, we need classes to implement it, and generally there is more than one implementation class; different classes often share some logic when implementing the interface. If every subclass had to implement that logic itself, there would be duplicated code. How can subclasses reuse it? Define a base class that implements the common logic and let the subclasses inherit it. Tomcat defines the base class LifecycleBase to implement the Lifecycle interface and puts the common logic into it, such as maintaining and transitioning the life-cycle state, firing life-cycle events, and adding and removing listeners, while the subclasses implement their own initialization, start and stop methods.

public abstract class LifecycleBase implements Lifecycle {

    // Hold all observers
    private final List<LifecycleListener> lifecycleListeners = new CopyOnWriteArrayList<>();

    /**
     * Publish an event
     *
     * @param type Event type
     * @param data Data associated with the event
     */
    protected void fireLifecycleEvent(String type, Object data) {
        LifecycleEvent event = new LifecycleEvent(this, type, data);
        for (LifecycleListener listener : lifecycleListeners) {
            listener.lifecycleEvent(event);
        }
    }

    // Template method: defines the skeleton of the initialization process
    @Override
    public final synchronized void init() throws LifecycleException {
        // 1. Status check
        if (!state.equals(LifecycleState.NEW)) {
            invalidTransition(Lifecycle.BEFORE_INIT_EVENT);
        }
        try {
            // 2. Move to INITIALIZING and trigger the listeners for this event
            setStateInternal(LifecycleState.INITIALIZING, null, false);
            // 3. Call the initialization method of the concrete subclass
            initInternal();
            // 4. Move to INITIALIZED and trigger the listeners for this event
            setStateInternal(LifecycleState.INITIALIZED, null, false);
        } catch (Throwable t) {
            ExceptionUtils.handleThrowable(t);
            setStateInternal(LifecycleState.FAILED, null, false);
            throw new LifecycleException(sm.getString("lifecycleBase.initFail", toString()), t);
        }
    }
}

In order to achieve one-click start and stop and elegant life-cycle management, Tomcat takes extensibility and reusability into account and pushes object-oriented thinking and design patterns to the extreme.
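If you want to see these life-cycle events fire, a small sketch with embedded Tomcat (assuming the tomcat-embed-core dependency) registers a listener on the Server component; the exact event names printed depend on the Tomcat version:

import org.apache.catalina.LifecycleEvent;
import org.apache.catalina.LifecycleListener;
import org.apache.catalina.startup.Tomcat;

public class LifecycleListenerDemo {
    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();
        tomcat.setPort(8080);

        // Observer: react to state transitions without touching Tomcat's startup code
        LifecycleListener listener = (LifecycleEvent event) ->
                System.out.println(event.getLifecycle().getClass().getSimpleName()
                        + " -> " + event.getType());
        tomcat.getServer().addLifecycleListener(listener);

        tomcat.start();   // prints events such as before_init, after_init, before_start, start, after_start
        tomcat.stop();
        tomcat.destroy();
    }
}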
If you need to maintain a set of entities with parent-child relationships, consider the composite pattern. The observer pattern sounds "high-end", but it simply means that when an event occurs, a series of update operations have to be performed; it gives you a low-coupling, non-intrusive notification and update mechanism.

3. Why Tomcat breaks the parent delegation mechanism

3.1. Parent delegation

We know that JVM class loaders use the parent delegation model by default: when loading a class, a loader first delegates to its parent, and only tries to load the class itself when the parent cannot find it. The relevant JDK code looks like this:

public Class<?> loadClass(String name) throws ClassNotFoundException {
    return loadClass(name, false);
}

protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
    synchronized (getClassLoadingLock(name)) {
        // Check whether the class has already been loaded
        Class<?> c = findLoadedClass(name);
        // If not loaded yet
        if (c == null) {
            // Delegate to the parent loader first (recursive call)
            if (parent != null) {
                c = parent.loadClass(name, false);
            } else {
                // If the parent loader is null, check whether Bootstrap has loaded it
                c = findBootstrapClassOrNull(name);
            }
            // If it still cannot be found, call our own findClass to load it
            if (c == null) {
                c = findClass(name);
            }
        }
        if (resolve) {
            resolveClass(c);
        }
        return c;
    }
}

protected Class<?> findClass(String name) {
    // 1. According to the class name, look up the class file in a specific directory
    //    and read the .class file into memory ...
    // 2. Call defineClass to convert the byte array into a Class object
    return defineClass(buf, off, len);
}

// Parses the bytecode array into a Class object; implemented with a native method
protected final Class<?> defineClass(byte[] b, int off, int len) {
    ...
}

There are three class loaders in the JDK, and you can also define your own. Their relationship is shown in the figure below.
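A quick way to observe this hierarchy at runtime is a small sketch like the following (the printed names are for JDK 8; the module system changed them in JDK 9+):

public class LoaderChainDemo {
    public static void main(String[] args) {
        ClassLoader loader = LoaderChainDemo.class.getClassLoader();
        while (loader != null) {
            System.out.println(loader); // AppClassLoader, then ExtClassLoader
            loader = loader.getParent();
        }
        // null represents the bootstrap loader, which is implemented natively
        System.out.println("Bootstrap class loader (represented as null)");
        // Core classes are loaded by bootstrap, so getClassLoader() returns null
        System.out.println(String.class.getClassLoader());
    }
}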
The working principle of these class loaders is the same; the difference is their loading paths, that is, the directories searched by their findClass methods are different.

3.2. Tomcat hot loading

Tomcat essentially performs periodic tasks through a background thread, regularly detecting changes in class files and reloading the classes if any change is found. Let's take a look at how the container's background task, ContainerBackgroundProcessor, is implemented:

protected class ContainerBackgroundProcessor implements Runnable {

    @Override
    public void run() {
        // Note that the parameter passed in here is the instance of the enclosing ("host") container
        processChildren(ContainerBase.this);
    }

    protected void processChildren(Container container) {
        try {
            // 1. Call the backgroundProcess method of the current container
            container.backgroundProcess();

            // 2. Traverse all child containers and recursively call processChildren,
            //    so that all descendants of the current container are processed
            Container[] children = container.findChildren();
            for (int i = 0; i < children.length; i++) {
                // The container base class has a field called backgroundProcessorDelay.
                // If it is greater than 0, the child container has its own background
                // thread and the parent does not need to call its processChildren.
                if (children[i].getBackgroundProcessorDelay() <= 0) {
                    processChildren(children[i]);
                }
            }
        } catch (Throwable t) {
            ...
        }
    }
}

Tomcat's hot loading is implemented in the Context container, mainly by calling the Context container's reload method. Leaving the details aside, from a macro perspective the reload does two things: it stops and destroys the Context container together with its associated resources (Listeners, Filters, Servlets, the Session manager and, crucially, the old class loader); then it creates a new class loader and a fresh Context and starts it again.
In this process, class loaders play a key role. A Context container corresponds to one class loader, and when that class loader is destroyed, all the classes it loaded are destroyed with it; during the restart, the Context container creates a new class loader to load the new class files.
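A minimal sketch of this idea (not Tomcat code; the directory and class name below are hypothetical placeholders): a fresh URLClassLoader re-reads the class files, and the classes loaded by the discarded loader go away with it.

import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Paths;

public class ReloadSketch {
    public static void main(String[] args) throws Exception {
        URL classesDir = Paths.get("/tmp/webapp/classes/").toUri().toURL();

        // The old loader and the classes it loaded become garbage once dereferenced
        URLClassLoader oldLoader = new URLClassLoader(new URL[]{classesDir}, null);
        Class<?> oldVersion = oldLoader.loadClass("com.example.OrderService");
        oldLoader.close();

        // A brand-new loader re-reads the (possibly recompiled) .class files
        URLClassLoader newLoader = new URLClassLoader(new URL[]{classesDir}, null);
        Class<?> newVersion = newLoader.loadClass("com.example.OrderService");

        // Same binary name, different defining loaders => different runtime classes
        System.out.println(oldVersion == newVersion); // false
    }
}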
3.3. Tomcat class loaders

Tomcat's custom class loader is WebAppClassLoader. Let's look at its findClass method first. For ease of understanding and reading, some details are removed:

public Class<?> findClass(String name) throws ClassNotFoundException {
    ...
    Class<?> clazz = null;
    try {
        // 1. First search for the class in the Web application's own directory
        clazz = findClassInternal(name);
    } catch (RuntimeException e) {
        throw e;
    }
    if (clazz == null) {
        try {
            // 2. If not found in the local directory, let the parent loader search
            clazz = super.findClass(name);
        } catch (RuntimeException e) {
            throw e;
        }
    }
    // 3. If the parent cannot find it either, throw ClassNotFoundException
    if (clazz == null) {
        throw new ClassNotFoundException(name);
    }
    return clazz;
}

1. First search for the class in the local directory of the Web application.
2. If it is not found there, hand the search over to the parent loader; its parent is the system class loader (AppClassLoader).
3. If the parent loader cannot find the class either, a ClassNotFoundException is thrown.

Tomcat's class loader also overrides the loadClass method. Let's look at its implementation, again with some details removed:

public Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
    synchronized (getClassLoadingLock(name)) {
        Class<?> clazz = null;

        // 1. Check the local cache first: has this loader already loaded the class?
        clazz = findLoadedClass0(name);
        if (clazz != null) {
            if (resolve) resolveClass(clazz);
            return clazz;
        }

        // 2. Check whether the class is already in the system class loader's cache
        clazz = findLoadedClass(name);
        if (clazz != null) {
            if (resolve) resolveClass(clazz);
            return clazz;
        }

        // 3. Try to load the class with ExtClassLoader. Why? See below.
        ClassLoader javaseLoader = getJavaseClassLoader();
        try {
            clazz = javaseLoader.loadClass(name);
            if (clazz != null) {
                if (resolve) resolveClass(clazz);
                return clazz;
            }
        } catch (ClassNotFoundException e) {
            // Ignore
        }

        // 4. Try to find the class in the local directory and load it
        try {
            clazz = findClass(name);
            if (clazz != null) {
                if (resolve) resolveClass(clazz);
                return clazz;
            }
        } catch (ClassNotFoundException e) {
            // Ignore
        }

        // 5. Try the system class loader (that is, AppClassLoader)
        try {
            clazz = Class.forName(name, false, parent);
            if (clazz != null) {
                if (resolve) resolveClass(clazz);
                return clazz;
            }
        } catch (ClassNotFoundException e) {
            // Ignore
        }
    }

    // 6. All of the above failed: throw an exception
    throw new ClassNotFoundException(name);
}

There are six main steps:
1. First check the local cache: has Tomcat's class loader already loaded this class?
2. If not, check whether the system class loader has already loaded it.
3. If neither has it, let ExtClassLoader load it. This step is critical: it prevents the Web application's own classes from overriding the JRE's core classes. Because Tomcat breaks the parent delegation mechanism, if a Web application defined its own class called Object and that class were loaded first, it would override the JRE's Object. This is why Tomcat's class loader tries ExtClassLoader first: ExtClassLoader in turn delegates to Bootstrap, so core JRE classes always win.
4. If ExtClassLoader fails to load the class, it is not a core class, so findClass is called to search the Web application's local directory.
5. If the class does not exist in the local directory, it is not a class defined by the Web application itself, and it is loaded by the system class loader. Note that the class is handed to the system class loader through Class.forName(name, false, parent), where parent is the system class loader (AppClassLoader).
6. If all of the above fail, a ClassNotFoundException is thrown.

3.4. Tomcat class loader hierarchy

Tomcat, as a Servlet container, usually runs more than one Web application at the same time, which raises several requirements:
1. Suppose we run two Web applications in Tomcat, and both contain a Servlet class with the same fully qualified name but different implementations. Tomcat must make sure the two do not conflict, so the classes of different Web applications must be isolated from each other.
2. If two Web applications both depend on the same third-party JAR (the same version of a common framework, for example), the JAR's classes only need to be loaded once and should be shared between the applications.
3.
Like the JVM, the classes of Tomcat itself and the classes of the Web applications need to be isolated from each other.

1. WebAppClassLoader. Tomcat's solution is to customize a class loader, WebAppClassLoader, and give each Web application its own WebAppClassLoader instance. Because the runtime identity of a class is determined by the loader instance plus the fully qualified class name, identically named classes in different applications no longer conflict, which achieves isolation.

2. SharedClassLoader. The essential requirement is to share library classes between Web applications without loading the same class repeatedly. Under parent delegation, every child loader can see the classes loaded by its parent, so it is enough to put the classes that need to be shared on the parent loader's path. Therefore, the Tomcat designers added a class loader, SharedClassLoader, as the parent of the WebAppClassLoaders; classes to be shared across applications are placed on its load path.

3. CatalinaClassLoader. How do we isolate Tomcat's own classes from the Web applications' classes? Sharing is achieved through a parent-child relationship, while isolation requires a sibling relationship: two class loaders that are parallel to each other, possibly with the same parent. Based on this, Tomcat designed another class loader, CatalinaClassLoader, as a sibling of SharedClassLoader, dedicated to loading Tomcat's own classes. There is still one problem with this design: what if some classes need to be shared between Tomcat and all Web applications? The same old trick applies: add another loader, CommonClassLoader, as the common parent of CatalinaClassLoader and SharedClassLoader; classes on CommonClassLoader's path are visible to both Tomcat and the Web applications.

4. Summary of the overall architecture analysis

Through the previous study of Tomcat's overall architecture, we know what core components Tomcat has, how they relate to each other, and how Tomcat handles an HTTP request. We can review it with a simplified class diagram: it shows the hierarchical relationship of the components, and the dotted line represents the path of a request flowing through Tomcat.

4.1 Connectors

The overall architecture of Tomcat consists of two core components, the connector and the container: the connector is responsible for external communication, the container for internal processing. The connector uses the ProtocolHandler (EndPoint plus Processor) to handle the I/O model and the application-layer protocol, and the Adapter to turn the Tomcat Request into a standard ServletRequest for the container. By studying the overall architecture of Tomcat we can extract some basic ideas for designing complex systems: first analyze the requirements and split sub-modules along the principle of high cohesion and low coupling; then find the changing and unchanging points in each sub-module, encapsulate the unchanging points with interfaces and abstract base classes, define template methods in the abstract base classes, and let the concrete subclasses implement the abstract methods, i.e. the changing points.

4.2 Container

The composite pattern is used to manage the containers, and start-up events are published through the observer pattern to achieve decoupling and the open-closed principle. Skeleton abstract classes and template methods separate the changing from the unchanging, leaving the changes to subclasses, which gives both code reuse and flexible extension. The chain of responsibility (Pipeline-Valve) is used to process requests, for example for access logging.

4.3 Class loaders

Tomcat's custom class loader WebAppClassLoader breaks the parent delegation mechanism in order to isolate Web applications from each other, while the SharedClassLoader, CatalinaClassLoader and CommonClassLoader hierarchy provides sharing and isolation where each is needed; core JRE classes are still protected by trying ExtClassLoader first.

5. Practical application scenarios

The overall architecture design of Tomcat has been briefly analyzed, from the connectors to the containers, together with the design ideas and design patterns of some components. The next step is to apply what we have learned: borrow from the elegant design and use it in actual work. Learning begins with imitation.
5.1. Chain of Responsibility pattern

At work there was a requirement: a user enters some information and chooses to check one or more of a company's modules, such as [Industrial and Commercial Information], [Judicial Information], [China Registration Status] and so on, and there are some common steps between modules that every module needs to reuse. This is like one request that is processed by multiple modules, so we can abstract each query module into a processing valve and keep these valves in a List. When a new module is added, we only need to add a new valve, which satisfies the open-closed principle; at the same time, a pile of verification code is decoupled into separate concrete valves, and an abstract class extracts the "unchanging" functionality. The sample code is as follows. First, abstract the processing valve:

/**
 * Chain of responsibility pattern: a valve that handles one check module
 */
public interface Valve {

    /**
     * Invoke the check for this module
     * @param netCheckDTO
     */
    void invoke(NetCheckDTO netCheckDTO);
}

Define an abstract base class to reuse common code:

public abstract class AbstractCheckValve implements Valve {

    // Get history records; code logic omitted
    public final AnalysisReportLogDO getLatestHistoryData(NetCheckDTO netCheckDTO,
                                                          NetCheckDataTypeEnum checkDataTypeEnum) {
        // ...
    }

    // Get the data source configuration used for verification; code logic omitted
    public final String getModuleSource(String querySource, ModuleEnum moduleEnum) {
        // ...
    }
}

Define the business logic of each module, for example the handling of [Baidu negative news]:

@Slf4j
@Service
public class BaiduNegativeValve extends AbstractCheckValve {

    @Override
    public void invoke(NetCheckDTO netCheckDTO) {
        // Module-specific check logic omitted
    }
}

The last step is to manage the modules the user chooses to check. We keep them in a List and use it to trigger the required check modules:

@Slf4j
@Service
public class NetCheckService {

    // Inject all valves, keyed by bean name
    @Autowired
    private Map<String, Valve> valveMap;

    /**
     * Send the verification request
     *
     * @param netCheckDTO
     */
    @Async("asyncExecutor")
    public void sendCheckRequest(NetCheckDTO netCheckDTO) {

        // Valves for the modules selected by the customer
        List<Valve> valves = new ArrayList<>();

        CheckModuleConfigDTO checkModuleConfig = netCheckDTO.getCheckModuleConfig();

        // Add the modules selected by the user to the valve chain
        if (checkModuleConfig.getBaiduNegative()) {
            valves.add(valveMap.get("baiduNegativeValve"));
        }

        // Omit some code ...

        if (CollectionUtils.isEmpty(valves)) {
            log.info("The check module list is empty; there is no task to check");
            return;
        }

        // Trigger the processing
        valves.forEach(valve -> valve.invoke(netCheckDTO));
    }
}

5.2 Template Method pattern

The requirement is to perform financial report analysis based on either the financial report Excel uploaded by the customer or a company name. For non-listed companies: parse the Excel, verify that the data is legal, then perform the calculation. For listed companies: check whether the company name exists, and if not, send an email and terminate the calculation; otherwise pull the financial report data from the database, initialize the inspection log, generate a report record, trigger the calculation, and finally update the task status according to success or failure. The important thing is to identify what changes and what stays the same:
The entire algorithm flow is a fixed template, but the concrete implementation of some steps has to be deferred to different subclasses; this is the classic scenario for the template method pattern.

@Slf4j
public abstract class AbstractAnalysisTemplate {

    /**
     * Template method for submitting a financial report analysis; defines the skeleton of the process
     * @param reportAnalysisRequest
     * @return
     */
    public final FinancialAnalysisResultDTO doProcess(FinancialReportAnalysisRequest reportAnalysisRequest) {
        FinancialAnalysisResultDTO analysisDTO = new FinancialAnalysisResultDTO();

        // Abstract method: pre-submit validation
        boolean prepareValidate = prepareValidate(reportAnalysisRequest, analysisDTO);
        log.info("prepareValidate validation result = {}", prepareValidate);
        if (!prepareValidate) {
            // Abstract method: build the data needed for the notification email
            buildEmailData(analysisDTO);
            log.info("Build email information, data = {}", JSON.toJSONString(analysisDTO));
            return analysisDTO;
        }

        String reportNo = FINANCIAL_REPORT_NO_PREFIX + reportAnalysisRequest.getUserId()
                + SerialNumGenerator.getFixLenthSerialNumber();

        // Generate the analysis log
        initFinancialAnalysisLog(reportAnalysisRequest, reportNo);
        // Generate the analysis record
        initAnalysisReport(reportAnalysisRequest, reportNo);

        try {
            // Abstract method: pull the financial report data; different subclasses implement this differently
            FinancialDataDTO financialData = pullFinancialData(reportAnalysisRequest);
            log.info("Financial report data fetched, ready to perform the calculation");
            // Calculate the indicators
            financialCalcContext.calc(reportAnalysisRequest, financialData, reportNo);
            // Mark the analysis log as successful
            successCalc(reportNo);
        } catch (Exception e) {
            log.error("An exception occurred in the financial report calculation subtask", e);
            // Mark the analysis log as failed
            failCalc(reportNo);
            throw e;
        }
        return analysisDTO;
    }

    // The changing points, implemented by the listed / non-listed subclasses
    protected abstract boolean prepareValidate(FinancialReportAnalysisRequest request,
                                               FinancialAnalysisResultDTO analysisDTO);

    protected abstract void buildEmailData(FinancialAnalysisResultDTO analysisDTO);

    protected abstract FinancialDataDTO pullFinancialData(FinancialReportAnalysisRequest request);
}

Finally, create two subclasses that inherit the template and implement the abstract methods. This decouples the processing logic of the listed and non-listed cases while reusing the shared code.
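As a rough sketch of what those two subclasses could look like (the class names and internals here are illustrative assumptions, not the original project's code):

@Service
public class ListedCompanyAnalysisTemplate extends AbstractAnalysisTemplate {

    @Override
    protected boolean prepareValidate(FinancialReportAnalysisRequest request,
                                      FinancialAnalysisResultDTO analysisDTO) {
        // Listed company: check that the company name exists; fail fast otherwise
        return true; // validation details omitted
    }

    @Override
    protected void buildEmailData(FinancialAnalysisResultDTO analysisDTO) {
        // Assemble the notification email for the "company not found" case
    }

    @Override
    protected FinancialDataDTO pullFinancialData(FinancialReportAnalysisRequest request) {
        // Listed company: pull the financial report data from the database
        return new FinancialDataDTO();
    }
}

@Service
public class NonListedCompanyAnalysisTemplate extends AbstractAnalysisTemplate {

    @Override
    protected boolean prepareValidate(FinancialReportAnalysisRequest request,
                                      FinancialAnalysisResultDTO analysisDTO) {
        // Non-listed company: validate the uploaded Excel data
        return true; // validation details omitted
    }

    @Override
    protected void buildEmailData(FinancialAnalysisResultDTO analysisDTO) {
        // Assemble the notification email for invalid Excel data
    }

    @Override
    protected FinancialDataDTO pullFinancialData(FinancialReportAnalysisRequest request) {
        // Non-listed company: parse the financial data out of the uploaded Excel
        return new FinancialDataDTO();
    }
}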
5.3 Strategy pattern

The requirement is to build a generic Excel import that can recognize bank statements. Assume a standard statement contains fields such as transaction time, income, expenditure, balance, payer account, payer name, payee name and payee account, and that we have already parsed out the column index of each required field in the Excel header. But statements come in many shapes:
1. Some contain all of the standard fields.
2. Income and expenditure share the same column and are distinguished by positive and negative numbers.
3. Income and expenditure share the same column and a transaction-type field distinguishes them.
4. Some special banks need special treatment.

That is, we need to pick the right processing logic based on the parsed column layout. We could write a lot of if-else branches; instead, we can use the strategy pattern: statements of different templates are handled by different processors, and the matching strategy is selected according to the template. Even if another type is added later, we only need to add a new processor, which keeps the code highly cohesive, loosely coupled and extensible. Define the processor interface and let different processors implement the processing logic; the context then injects all processors and picks the one that supports the template:

public interface DataProcessor {

    /**
     * Process one row of statement data
     * @param bankFlowTemplateDO template metadata (parsed column indexes)
     * @param row one row of the Excel statement
     * @return
     */
    BankTransactionFlowDO doProcess(BankFlowTemplateDO bankFlowTemplateDO, List<String> row);

    /**
     * Whether this processor can handle the template; each strategy decides
     * from the template data whether it supports parsing it.
     * @return
     */
    boolean isSupport(BankFlowTemplateDO bankFlowTemplateDO);
}

// Processor context
@Service
@Slf4j
public class BankFlowDataContext {

    // Inject all processors
    @Autowired
    private List<DataProcessor> processors;

    // Find the processor that supports the template and let it process the row
    public void process(BankFlowTemplateDO bankFlowTemplateDO, List<String> row) {
        for (DataProcessor processor : processors) {
            if (processor.isSupport(bankFlowTemplateDO)) {
                // row is one row of statement data
                processor.doProcess(bankFlowTemplateDO, row);
                break;
            }
        }
    }
}

Define a default processor for the standard template. To add a new template, just add a new processor that implements DataProcessor:

/**
 * Default processor: handles the standard statement template
 */
@Component("defaultDataProcessor")
@Slf4j
public class DefaultDataProcessor implements DataProcessor {

    @Override
    public BankTransactionFlowDO doProcess(BankFlowTemplateDO bankFlowTemplateDO, List<String> row) {
        // Processing logic details omitted (bankTransactionFlowDO is built here)
        return bankTransactionFlowDO;
    }

    @Override
    public boolean isSupport(BankFlowTemplateDO bankFlowTemplateDO) {
        // Judgment of whether this statement template can be parsed is omitted
        boolean isDefault = true;
        return isDefault;
    }
}

Through the strategy pattern, different processing logic is assigned to different processor classes, which is completely decoupled and easy to extend.

You can debug the Tomcat source code using embedded Tomcat; see GitHub: https://github.com/UniqueDong/tomcat-embedded
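For reference, a minimal embedded-Tomcat bootstrap along those lines might look like the sketch below (assuming the tomcat-embed-core dependency and Tomcat 9's javax.servlet API; paths and names are illustrative); it is convenient for setting breakpoints in the connector and container code:

import java.io.File;
import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.catalina.Context;
import org.apache.catalina.startup.Tomcat;

public class EmbeddedTomcatDebug {

    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();
        tomcat.setBaseDir(new File("target/tomcat").getAbsolutePath());
        tomcat.setPort(8080);
        tomcat.getConnector(); // creates the default HTTP/1.1 connector

        // Any existing directory works as docBase for a programmatically registered servlet
        Context ctx = tomcat.addContext("", new File(".").getAbsolutePath());
        Tomcat.addServlet(ctx, "hello", new HttpServlet() {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                resp.getWriter().write("hello from embedded Tomcat");
            }
        });
        ctx.addServletMappingDecoded("/hello", "hello");

        tomcat.start();               // breakpoints in Lifecycle/Connector code are hit here
        tomcat.getServer().await();   // then request http://localhost:8080/hello
    }
}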