Architecture Evolution With Mulesoft

Monolithic Architecture (Single Unit)

Monolithic architecture can be considered the first architecture: simple, tightly coupled applications that run as a single unit and group all functionality in one application layer.

If, for example, we want to access another service or system through an API, we must develop the business logic, error handling, and so on in the application itself. The following diagram shows a simple example of a monolithic Customer Relationship Management architecture.

Monolithic applications work well while they are small, but as the architecture grows they become harder to manage and refactor. They also make continuous integration more complicated, making a DevOps process almost impossible to sustain.

Communication between resources and/or applications is direct, without any middleware/ESB in between. The difficulty increases further when communicating with a web service in some languages such as Java, where connecting to a SOAP service is complex.

SOA Architecture (Coarse-Grained)

SOA (Service-Oriented Architecture) allows for greater decoupling and therefore an evolution toward a more diversified, or coarse-grained, architecture.

This is the original architecture of MuleSoft: the ESB centralizes all the business logic and connects services and applications quickly and simply, regardless of their technology or language.

MuleSoft offers Mule Runtime which, much like Apache Tomcat does for servlets, works as a container for applications, as shown in the following diagram.

In this way, we offload most of the work and business logic from the monolithic application. The ESB takes care of transforming data, routing, accessing the necessary services, handling errors, and so on. The source application simply generates a message (if necessary) and sends it to the ESB via an HTTP request.

However, one problem persists: all the deployed integrations run on the same runtime, which leads back to coupling and to an architecture that still has a monolithic nature. For example, when you apply a configuration to the runtime, it applies to all of your deployed applications.

Microservice Architecture (Fine-Grained)

Finally, the fine-grained approach. This architecture resembles SOA but with smaller, independent services. Microservices add a lot of complexity at the architectural level because many small actors are involved, but the advantage is that they are all isolated and independent.

Service boundaries must be very clear; splitting services too finely can result in an overly complex and excessive architecture.

Using microservices requires a significant change of mentality: things must be simple, well documented, and easy to run. This is why a development cycle should also be defined so teams can build, deploy, and evolve quickly.

MuleSoft has also evolved and is no longer just middleware for SOA architectures; it now also targets microservice architectures with its integration platform as a service (iPaaS), Anypoint Platform.

In this way, through CloudHub (MuleSoft's cloud hosting platform, integrated with Anypoint Platform), you can deploy applications so that each one is automatically created in its own separate instance.

In addition, MuleSoft promotes API-led connectivity, a methodical way of connecting data and applications through reusable APIs that helps decouple the implementation from the API. API-led connectivity is divided into three layers: the Experience layer, the Process layer, and the System layer. The Experience layer is the one that interacts with the client and has no implementation of its own, only an exposed API that can be managed and secured.
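To make the layering concrete, an Experience API is often little more than a flow that exposes a client-facing endpoint and delegates to a Process API. The sketch below is only illustrative (Mule 4 HTTP connector syntax; the host names, port, and paths are made-up placeholders, not part of any MuleSoft reference implementation):

    <http:listener-config name="Experience_API_Listener">
        <http:listener-connection host="0.0.0.0" port="8081"/>
    </http:listener-config>
    <http:request-config name="Process_API_Request">
        <!-- Hypothetical location of a Process layer API -->
        <http:request-connection host="process-api.internal" port="8081"/>
    </http:request-config>

    <flow name="experience-get-customer">
        <!-- Experience layer: exposes the endpoint, contains no business logic -->
        <http:listener config-ref="Experience_API_Listener" path="/customers/{id}"/>
        <!-- Delegate to the Process layer API -->
        <http:request config-ref="Process_API_Request" method="GET" path="#['/customers/' ++ attributes.uriParams.id]"/>
    </flow>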

But there's still one more evolution. Thanks to Anypoint Runtime Fabric and Runtime Manager (integrated with Anypoint Platform), these applications can be deployed to runtimes running on customer-managed infrastructure in AWS, Google Cloud, Azure, virtual machines, or bare metal.


Executing Shell Scripts From Mule

Sometimes it is better to perform a task directly with the help of the OS rather than using wrapper components. For example, if we just want to move a file from one directory to another without any transformation, we can avoid loading the file into memory and move it directly using a shell script.

Objective: Executing shell scripts from a Mule custom Java component.

In this tutorial, I'll pass the filename from an HTTP POST to the flow, and later my custom Java component will execute the shell script command to move the file.

Steps

1. Create a flow by adding an HTTP Listener and do the basic configuration.

   <flow name="fileMoverFlow">
        <http:listener config-ref="HTTP_Listener_Configuration" path="/moveFile" allowedMethods="POST" doc:name="HTTP"/>
    </flow>

Now create a .sh (or any extension) file and put your commands into it. You can dynamically pass values into this script file using a template component. For example, you can pass the file name to this script by putting #[payload] in the script and then parsing it with a Parse Template component.

I created a file with the following content:

mv /input/#[payload] /output/

As you can see, this script has a MEL expression in it. To populate values through MEL expressions, we'll pass this script through the Parse Template component.

Now the flow becomes:

<flow name="fileMoverFlow">
        <http:listener config-ref="HTTP_Listener_Configuration" path="/moveFile" allowedMethods="POST" doc:name="HTTP"/>
        <parse-template location="scripts/fileMover.sh" doc:name="Parse Template"/>
    </flow>

Now, I create a Java class that takes the shell script commands as a string input and executes them.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;

import org.mule.api.MuleMessage;
import org.mule.api.transformer.TransformerException;
import org.mule.transformer.AbstractMessageTransformer;

public class ScriptExecuter extends AbstractMessageTransformer {

    @Override
    public Object transformMessage(MuleMessage message, String outputEncoding) throws TransformerException {
        // The payload contains the already-parsed script as a string
        String script = (String) message.getPayload();

        // Start a shell and feed the script to it via stdin
        List<String> commandList = new ArrayList<>();
        commandList.add("/bin/sh");
        ProcessBuilder builder = new ProcessBuilder(commandList);
        builder.redirectErrorStream(true);

        try {
            Process shell = builder.start();
            try (OutputStream commands = shell.getOutputStream()) {
                commands.write(script.getBytes());
            }
            // Read and print the combined output of the script
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(shell.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            }
            // Wait for the shell to finish and check the exit code
            int exitCode = shell.waitFor();
            System.out.println("EXIT CODE: " + exitCode);
        } catch (IOException | InterruptedException e) {
            throw new TransformerException(this, e);
        }
        return "Success";
    }
}

Now, invoke the ScriptExecuter class from the flow to run your script. You can do this with a custom transformer (as in the final flow below) or with a Groovy component.
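If you prefer the Groovy component route, a minimal sketch could look like the following (this uses the Mule 3 scripting module; instantiating the transformer directly from the script is purely illustrative and is not needed if you use the custom transformer shown below):

    <scripting:component doc:name="Groovy">
        <scripting:script engine="Groovy"><![CDATA[
            // 'message' is the MuleMessage binding provided by the scripting module
            return new com.test.ScriptExecuter().transformMessage(message, null)
        ]]></scripting:script>
    </scripting:component>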

The final flow is as follows:

    <flow name="fileMoverFlow">
        <http:listener config-ref="HTTP_Listener_Configuration" path="/moveFile" allowedMethods="POST" doc:name="HTTP"/>
        <parse-template location="scripts/fileMover.sh" doc:name="Parse Template"/>
        <set-payload value="#[payload:java.lang.String]" doc:name="Set Payload"/>
        <custom-transformer class="com.test.ScriptExecuter" doc:name="Execute Script"/>
    </flow>


Integrating Mule ESB with .NET Based Rules Engines

Why do I want to do this?

Utilizing a rules engine promotes efficiency in system interfaces where business logic needs to be executed and that logic is frequently updated.

You could wire all of this logic into your integration application via custom code or several routers, but these rules become difficult to maintain in code and may require several re-deployments as changes are introduced.

Using a rules engine allows developers to decouple business logic from integration logic and as a result, rules can be easily maintained.

MuleSoft recognizes that organizations may have made significant investments in .NET based rules engines and these rules may need to be extended from their legacy platforms for a period of time.

As a result of business transformation, there may be new requirements that are better suited for a modern integration platform to address in order to support API or SaaS integration use cases.

MuleSoft’s Anypoint Platform is able to support these use cases by providing the agility and connectivity to enable these business scenarios while supporting an existing rules engine platform.

Connecting the BizTalk Rules Engine

To demonstrate this concept, we will take a recent customer scenario for connecting Anypoint Platform to the BizTalk Rules Engine. This MuleSoft customer has seen their IT landscape evolve and they now have requirements to support SaaS based endpoints and comprehensive API lifecycle management.

Much like any organization, there were some timing constraints in making a complete transformation. As a result, they wanted to continue to leverage their legacy rules engine for a period of time while this transition was taking place.

High Level Architecture

In this simplified walk-through we are going to have a consuming application that would like to perform a credit check on a particular customer request.

This scenario is well suited for using a business rules engine as there is different logic involved in performing this credit check that can change frequently.

When the credit check logic does require modification, we do not have to redeploy a lot of different components; just a set of business rules. A rules engine provides a 'separation of concerns' that allows an ESB to focus on what ESBs are good at: connectivity and routing.

The flow of our solution follows:

  • A consuming application will reach out to Mule ESB via an HTTP request.
  • Mule ESB in turn will call a native .NET method via the .NET Connector, which was released in July 2014. The .NET Connector allows developers to call .NET assemblies that have been written in any .NET language.
  • The BizTalk Rules Engine exposes an API which can be consumed from any .NET assembly or application. We are able to take advantage of this API within our .NET method.
  • We will pass a TypedXML document to the BizTalk Rules Engine, which is the data format that the BizTalk Rules Engine expects.
  • The BizTalk Rules Engine will evaluate the incoming message and then run it through a series of 5 different rules that evaluate the Customer Group that the customer belongs to in Salesforce and the maximum credit threshold for that particular group.
  • A boolean value will be returned, indicating whether or not the credit check has been approved.


Running Mule CE Embedded with Spring Boot

In recent years there has been a surge in the idea of microservices. Although this term is vague in nature there are some ideas on how to deploy and run applications with “microservices” in mind.

Spring has come to the forefront of the microservice architecture with its “opinionated view of building production-ready Spring applications”.

While Spring Boot provides several “starter” configurations for most application needs, Spring will at times have to integrate or take a back seat to other systems.

By design, that is the beauty of Spring: it can be utilized as a top-level container or nicely integrated into your current solution.

Mule CE is an open source integration tool. Mule CE applications are normally run inside a Mule runtime. With mule-spring-boot-starter, you can run Mule CE embedded in a Spring Boot application.

This allows Mule developers to quickly prototype and/or deploy Mule applications without having to download Mule runtime, create a Maven artifact, and push the artifact to the Mule runtime.

This project will allow developers to build and run the Mule application in much the same manner as other Spring Boot applications.

Add Maven Dependency:

<dependency>
    <groupId>net.taptech</groupId>
    <artifactId>mule-spring-boot-starter</artifactId>
    <version>1.0.0</version>
</dependency>

Add Repositories:

<repositories>
    <repository>
        <id>Central</id>
        <name>Central</name>
        <url>http://repo1.maven.org/maven2/</url>
        <layout>default</layout>
    </repository>
    <repository>
        <id>mulesoft-releases</id>
        <name>MuleSoft Repository</name>
        <url>http://repository.mulesoft.org/releases/</url>
        <layout>default</layout>
    </repository>
    <repository>
        <id>mulesoft-snapshots</id>
        <name>MuleSoft Snapshot Repository</name>
        <url>http://repository.mulesoft.org/snapshots/</url>
        <layout>default</layout>
    </repository>
    <repository>
        <id>spring-milestones</id>
        <name>Spring Milestones Repository</name>
        <url>http://repo.spring.io/milestone</url>
        <layout>default</layout>
    </repository>
    <repository>
        <id>spock-snapshots</id>
        <url>https://oss.sonatype.org/content/repositories/snapshots/</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
</repositories>

Add Mule Modules and Dependencies as Needed:

In this example, the HTTP connector is used, so a jar file is needed that provides the XML schema and corresponding Java code to support the HTTP connector.

<dependency>
    <groupId>org.mule.transports</groupId>
    <artifactId>mule-transport-http</artifactId>
    <version>${mule.version}</version>
</dependency>

Create a Mule Config File:

Make sure this file is on the artifact classpath. Then create an application property called mule.config.files and add a comma-separated list of Mule config files.

mule.config.files=mule-config.xml

Here's an example mule-config.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:http="http://www.mulesoft.org/schema/mule/http" xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd">
    <http:listener-config name="HTTP_Listener_Configuration" host="0.0.0.0" port="8081" doc:name="HTTP Listener Configuration" />
    <flow name="mule-demo-filesFlow">
        <logger level="INFO" doc:name="Logger" category="net.taptech" message="It works!!!!" />
        <set-payload value="testing" doc:name="Set Payload" />
    </flow>
    <flow name="mule-demo-filesFlow1">
        <http:listener config-ref="HTTP_Listener_Configuration" path="/test" doc:name="HTTP" />
        <flow-ref name="mule-demo-filesFlow" doc:name="mule-demo-filesFlow" />
    </flow>
</mule>

Add Annotation to Your Spring Boot Application Entry Point:

@EnableMuleConfiguration

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.Banner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ApplicationContext;
// @EnableMuleConfiguration is provided by the mule-spring-boot-starter artifact

@EnableMuleConfiguration
@SpringBootApplication
public class DemoMuleSpringBootApplication {

    private static final Logger logger = LoggerFactory.getLogger(DemoMuleSpringBootApplication.class);

    @Autowired
    private ApplicationContext context;

    public static void main(String... args) {
        logger.info("Starting SpringApplication...");
        SpringApplication app = new SpringApplication(DemoMuleSpringBootApplication.class);
        app.setBannerMode(Banner.Mode.CONSOLE);
        app.setWebEnvironment(false);
        app.run();
        logger.info("SpringApplication has started...");
    }
}


How to Design a RAML-Based REST API With Mulesoft Anypoint API Manager

RAML (RESTful API Modeling Language) is based on the YAML format and is used to design REST APIs. RAML provides various features, including standardization, reusability, easy readability, and much more.

Now, we will walk through how to create REST APIs using API Manager.

Create an Anypoint MuleSoft Account

Create an Anypoint MuleSoft account and sign into the Anypoint MuleSoft platform.

Add a New API

First, go to API Manager as shown below:

Click API Manager to reach the API Manager Screen. Then, click Add New API. Fill in the details like API name and Version name. Description and API endpoint are optional.

After filling in these details, click Add.

Define the API

Next, click Define API in the API Designer.

We will be navigated to the API Designer, where we can start writing the RAML. We can see the generated documentation on the right side, depending on the RAML we are writing.

On the left side, we can see the RAML file name. By default, the file name will be api.raml. We can rename the RAML file by right-clicking on api.raml. The extension of a RAML file is always .raml.

Test the API from API Manager

We need to enable the Mocking Service to test the API. Once we enable the mocking service, a baseUri will be added to the RAML.

After enabling the mocking service, we can test the REST API by clicking on any of the HTTP methods on the right side of the screen. Then, click Try it!.

RAML Example

#%RAML 0.8
title: Book Service API
version: 1.0
/users:
  /authors:
    description: This is used to get authors details
    get:
      responses: 
        200:
          body:
            application/json:
              example: |
                {
                "authorID":1,
                "authorName":"Robert",
                "DOB":"21/04/1986",
                "Age":30
                }  
    post:
      body: 
        application/json:
          example: |
            {
            "authorID":1,
            "authorName":"Stephen",
            "DOB":"21/04/1986",
            "Age":30
             } 
      responses: 
        201:
          body: 
            application/json:
              example: |
                {
                "message":"Author updated {but not really"
                }
    /{authorID}:
      get:
        responses: 
          200:
            body: 
              application/json:
                example: |
                  {
                  "TotalBooks":30,
                  "Subject":"Maths,Science",
                  "Publication":"Nirvana"
                  }
  /books:
    get: 
    post:
    put:
    /{bookTitle}:
      get:
        queryParameters: 
          author:
            displayName: Author
            description: The Author's Full Name
            type: string
            required: false
            example: Michael Lynn
          publicationYear:
            displayName: Publication Year
            description: Year of Publication
            type: number
            required: false
            example: 2010
          rating:
            displayName: Rating
            description: Average Rating of book submitted by users (1-5)
            type: number
            required: false
            example: 3.5
          isbn:
            displayName: ISBN
            description: ISBN Number of Book
            type: string
            maxLength: 10
            example: 1234567
            required: false
        responses: 
          200:
            body: 
              application/json:
                example: |
                  {
                  "id": "123",
                  "title": "API Design",
                  "description": null,
                  "datetime": 1341533193,
                  "author": "Mary"
                  }
      put:
      delete:
      /author:
        get:
      /publisher:
        get:

Including Example Responses

Simulating calls to the API is a critical design task that helps you troubleshoot problems and demo the API to prospective users even before you have implemented it. Valuable feedback can result from a demo and help you improve the API. To simulate calls to the API, you include the following things:

  • A JSON example of data an actual implementation of the API would return
  • The HTTP status codes the API returns on success or failure.

You use the mocking service in API Console to provide a base URI for the unimplemented API. When you make a successful call to the API, the mocking service returns the 200 status code and the example response.


Transaction Management in MuleSoft/Anypoint Studio

Mule handles the concept of transactions much like traditional systems do: a transaction will either complete and succeed, or it will be incomplete and fail. If a transaction fails, Mule rolls back every single operation and there is no commit.

How Mule Starts a New Transaction

The Mule flow starts a new transaction when a flow begins with a transactional resource (inbound connector).

The flow manages the outgoing operation as a transaction if the flow includes a transactional outbound connector.

In a flow with both an inbound and an outbound connector, a transaction is initiated by the inbound connector and the outgoing (outbound) operation becomes part of the same transaction.

Transaction Types

Mule supports three types of transactions:

  1. Single resource
  2. Multiple resource
  3. XA transactions (extended architecture)

Single Resource Transactions

These are used to send or receive messages using only one single resource – JMS, VM, or JDBC.

The example below illustrates a flow which includes a single resource transaction applied to inbound and outbound JMS connectors. 

<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:jms="http://www.mulesoft.org/schema/mule/jms" xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
xmlns:spring="http://www.springframework.org/schema/beans" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/jms http://www.mulesoft.org/schema/mule/jms/current/mule-jms.xsd">
    <jms:connector name="JMS" validateConnections="true" doc:name="JMS"/>
    <flow name="transactionmanagementFlow">
        <jms:inbound-endpoint queue="test.input" connector-ref="JMS" doc:name="JMS">
            <jms:transaction action="ALWAYS_BEGIN"/>
        </jms:inbound-endpoint>
        <logger level="INFO" doc:name="Logger"/>
        <jms:outbound-endpoint doc:name="JMS" connector-ref="JMS" queue="test.output">
            <jms:transaction action="ALWAYS_JOIN"/>
        </jms:outbound-endpoint>
    </flow>
</mule>

The above snippet has a JMS endpoint that receives messages on a test.input queue and another JMS endpoint that sends messages to a test.output queue. The action attribute tells Mule how to handle the transaction.

In this snippet, a new transaction is initiated for every message received on the inbound endpoint, and Mule joins the transaction already in progress for every message it sends out on the outbound endpoint.
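For instance, if the same flow might sometimes be invoked while a transaction is already in progress, the inbound endpoint can use a different action such as BEGIN_OR_JOIN. This is only a sketch of that variation (the action values come from the Mule 3 transaction configuration; the queue name is the same one used above):

    <jms:inbound-endpoint queue="test.input" connector-ref="JMS" doc:name="JMS">
        <jms:transaction action="BEGIN_OR_JOIN"/>
    </jms:inbound-endpoint>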

A commit will happen only for those messages which pass successfully through the complete flow.

Performance

Single resource transactions perform better than multiple resource or XA transactions.

Multiple Resource Transactions

These are used to group together operations from multiple JMS or VM resources into a single transaction. If there is a need to group two different connectors to consolidate in a transaction, then we can use a multiple resource transaction.

<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:wmq="http://www.mulesoft.org/schema/mule/ee/wmq" xmlns:jbossts="http://www.mulesoft.org/schema/mule/jbossts" xmlns:ee="http://www.mulesoft.org/schema/mule/ee/core" xmlns:jms="http://www.mulesoft.org/schema/mule/jms" xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
xmlns:spring="http://www.springframework.org/schema/beans" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/jms http://www.mulesoft.org/schema/mule/jms/current/mule-jms.xsd
http://www.mulesoft.org/schema/mule/ee/core http://www.mulesoft.org/schema/mule/ee/core/current/mule-ee.xsd
http://www.mulesoft.org/schema/mule/jbossts http://www.mulesoft.org/schema/mule/jbossts/current/mule-jbossts.xsd
http://www.mulesoft.org/schema/mule/ee/wmq http://www.mulesoft.org/schema/mule/ee/wmq/current/mule-wmq-ee.xsd">
    <jms:connector name="JMS" validateConnections="true" doc:name="JMS"/>
    <wmq:connector name="WMQ" hostName="localhost" port="1414" transportType="CLIENT_MQ_TCPIP" validateConnections="true" doc:name="WMQ"/>
    <flow name="transactionmanagementFlow">
        <jms:inbound-endpoint queue="test.input" connector-ref="JMS" doc:name="JMS">
            <ee:multi-transaction action="ALWAYS_BEGIN"/>
        </jms:inbound-endpoint>
        <logger level="INFO" doc:name="Logger"/>
        <wmq:outbound-endpoint queue="test.output" connector-ref="WMQ" doc:name="WMQ">
            <ee:multi-transaction action="ALWAYS_JOIN"/>
        </wmq:outbound-endpoint>
    </flow>
</mule>

In the above example, a JMS transaction is initiated when Mule receives the message on the inbound JMS endpoint, and a WMQ transaction is initiated when it sends the message out on the outbound endpoint. Because the connectors are configured as multiple resource transactions, the WMQ transaction will be committed first, followed by the JMS transaction.

<ee:multi-transaction action="ALWAYS_BEGIN"/>
<ee:multi-transaction action="ALWAYS_JOIN"/>

Mule will either commit both transactions successfully or roll back both of them as one unit.

If an inbound connector is configured as a multi-resource transaction, then the other connectors should also be configured as multi-resource transactions with the action set to "ALWAYS_JOIN" so that they become part of the same transactional unit.

Performance

  • It performs better than XA transactions but slower than Single Resource transactions. 
  • It can handle partial commits and rollbacks. 
  • It uses a 1.5-phase commit protocol and is less reliable than XA.


Integration From ETL tools to Mule ESBs

In the IT landscape, ETL (extract, transform, load) processes have long been used for building data warehouses and enabling reporting systems. Using business intelligence (BI) oriented ETL processes, businesses extract data from highly distributed sources, transform it through manipulation, parsing, and formatting, and load it into staging databases.

From this staging area, data summarizations and analytical processes then populate data warehouses and data marts.

How ETL tools came to operational integration

ETL tools certainly have their place in the IT environment, as numerous database administrators use them to streamline processes and deliver optimal value to the business.

  • Data warehousing: Historically, the primary use for ETL tools has been to enable business intelligence. Pulling databases, application data and reference data into data warehouses provide businesses with visibility into their operations over time and enable management to make better decisions.
  • Data integration: Data integration allows companies to migrate, transform, and consolidate information quickly and efficiently between systems of all kinds. ETL tools reduce the pain of manually entering data and allow dissimilar systems to communicate, all the while supplying a unified view.

The leading ETL tool, Informatica PowerCenter, has a long history in the data integration space. Its success can be attributed to its cross functionality, reusable components, and automatable processes.

Optimized for moving large amounts of data in a batch-oriented manner, PowerCenter and similar ETL tools have been used to integrate enterprise applications across heterogeneous environments.

ETL tools for operational data integration

Operational databases house transactional data such as employee information, sales, customer feedback, and PoS information. These databases provide the foundation for the operational systems and applications that run the business.

As operations increasingly required these systems to integrate with each other, existing ETL tools provided an obvious solution. Already supporting data-level connectivity to many popular databases and applications, ETL tools became a quick and seemingly simple means of connectivity and data movement.

In a time when APIs were not as abundant, ETL tools were the go-to solution for operational use cases.

ETL tools get complicated

ETL tools indeed provide a method of communication between databases and applications, but they pose significant challenges over time. Because creating this type of connectivity requires comprehensive knowledge of each operational database or application, interconnectivity can get complicated as it calls for very invasive custom integrations.

APIs and ESBs simplify data integration

The increase in popularity of APIs has also made it much easier to create connectivity. With APIs, developers can access endpoints and build connections without having in-depth knowledge of the system itself, simplifying processes tremendously.

As ETL tools remain focused on BI and big data solutions, and as traditional operational data integration methods become outdated with the rising popularity of cloud computing, ESBs have become the better option for creating connectivity.

An enterprise service bus (ESB) provides API-based connectivity with real-time integration. Unlike traditional ETL tools used for data integration, an ESB isolates applications and databases from one another by providing a middle service layer.

This abstraction layer reduces dependencies by decoupling systems and provides flexibility. Developers can utilize pre-built connectors to easily create integrations without extensive knowledge of specific application and database internals, and can very quickly make changes without fear of the entire integrated system falling apart.

Shielded by APIs, applications and databases can be modified and upgraded without unexpected consequences. In comparison to utilizing ETL tools for operational integration, an ESB provides a much more logical and well defined approach to take on such an initiative.


MuleSoft provides an integration platform

MuleSoft offers an ESB solution to help businesses with their integration needs. Mule as an ESB is a component within Anypoint Platform. The next generation platform consists of a set of products that help businesses connect SaaS, cloud, mobile, and on-premises applications and services as well as data sources.

Working with numerous other components, Anypoint Platform delivers a robust integration solution for businesses:

  • DataWeave: A graphical data mapping solution that works with Mule as an ESB and Anypoint Studio (MuleSoft’s graphical design environment) to provide powerful data integration capabilities with an easy to use interface.
  • Anypoint Connectors: Leveraging a library of Anypoint Connectors enables instant API connectivity to hundreds of popular applications and services, making it easy to extract and load data into popular sources and endpoints.
  • Visual data mapping: A graphical interface eliminates the need for intricate manual coding. Simply use the graphical drag and drop interface of Anypoint Studio to accelerate development productivity.
  • File type support: With support for flat and structured data formats such as XML, JSON, CSV, POJOs, Excel, and much more, organizations have flexibility over which data formats to use.
  • Database-level connectivity: For cases where direct interaction to databases is required, MuleSoft’s Anypoint Connectors include options to connect to relational databases, as well as emerging Big Data platforms like MongoDB and Hadoop.


How to do Mule Deployment with Maven and Jenkins Pipeline

Mule Application Builds and Deployment can be fully managed using Maven. In the development phase, Anypoint Studio makes it easy to manage your application dependencies using Maven.

For deployment tasks, Mule provides a Maven plugin that can help automate application deployment to different target runtime environments such as Standalone, CloudHub, ARM, Cluster, and more.

There are different ways to Install and Run Mule Runtime –

  1. Local Standalone Server: Mule Runtime is installed on your local server and Mule runs as a single server instance.
  2. Local Cluster: Similar individual instances set up locally, except they are part of a cluster and interact with each other.
  3. CloudHub: The integration platform as a service (iPaaS) provided by MuleSoft, where your Mule application runs in the cloud.
  4. Pivotal Cloud Foundry: Mule Runtime deployed on Pivotal Cloud Foundry.

Deployments to all these environments can be managed through Manual Copy (For Local), Anypoint Runtime Manager, or Maven Deploy Plugin.

In this post, we will see how we can leverage the Mule Maven plugin to perform deployments. We will deploy to a standalone Mule instance as well as to a CloudHub instance. In the end, we will also automate our deployment using a Jenkins Pipeline.

Version used:

  • Mule Runtime 3.9.0
  • Jenkins 2.11.0
  • Mule Maven Plugin 2.3.3

Mule Maven Plugin

The mule-maven-plugin is a Maven plugin provided as part of the Mule framework. It allows us to run integration tests for a Mule application and also to deploy the application to various environments.

Configuration

This plugin is available in the MuleSoft public repository, so you will need to add the repository below to your settings.xml:

<pluginRepositories>
    <pluginRepository>
        <id>mule-public</id>
        <url>https://repository.mulesoft.org/nexus/content/repositories/releases</url>
    </pluginRepository>
</pluginRepositories>

Once you have added the repository, you can include the plugin in your pom.xml as –

<plugin>
  <groupId>org.mule.tools.maven</groupId>
  <artifactId>mule-maven-plugin</artifactId>
  <version>2.3.3</version>
</plugin>

At this point, you are ready to add configuration for your target server/environment.

Deployment Profiles:

We may have different environments available for deploying our application. It is even possible that you have a mix of the local standalone server and CloudHub approaches.

So, Instead of configuring our pom.xml for a single environment, we will use a Maven parent POM approach. We will define a Maven Parent POM and add a Maven Profile for each of our target environments.

The profile below will help us deploy to a local Mule server:

<profile>
	<id>standalone</id>
	<build>
		<plugins>
			<plugin>
				<groupId>org.mule.tools.maven</groupId>
				<artifactId>mule-maven-plugin</artifactId>
				<configuration>
                  <standaloneDeployment>
                    <muleVersion>${mule.version}</muleVersion> 
                    <muleHome>${mule.home}</muleHome> 
                    <applicationName>${artifactId}</applicationName> 
                  </standaloneDeployment>
				</configuration>
				<executions>
					<execution>
						<id>deploy</id>
						<phase>deploy</phase>
						<goals>
							<goal>deploy</goal>
						</goals>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>
</profile>
  • muleVersion: the target Mule version, if Mule should be installed as part of the deployment.
  • muleHome: the path to our locally installed Mule server, such as /opt/mule. muleVersion and muleHome are mutually exclusive.
  • applicationName: the application name to use when deploying to the Mule instance.
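For reference, these placeholders can be supplied as Maven properties in the parent POM or overridden with -D on the command line. The values below are illustrative only:

<properties>
    <mule.version>3.9.0</mule.version>
    <!-- Use mule.home instead of mule.version when deploying to an existing local install -->
    <mule.home>/opt/mule</mule.home>
</properties>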

The profile below will help with CloudHub deployment:

<profile>
  <id>cloudhub</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.mule.tools.maven</groupId>
        <artifactId>mule-maven-plugin</artifactId>
        <configuration>
          <cloudHubDeployment>
            <muleVersion>${mule.version}</muleVersion> 
            <username>${anypoint.username}</username> 
            <password>${anypoint.password}</password> 
            <applicationName>${artifactId}-${maven.build.timestamp}</applicationName> 
            <environment>${cloundhub.env}</environment> 
            <businessGroup>${anypoint.businessGroup}</businessGroup>
            <region>${cloudhub.region}</region>
            <uri>${anypoint.uri}</uri> 
            <workerType>${cloudhub.workerType}</workerType> 
            <workers>${cloudhub.workers}</workers>
          </cloudHubDeployment>
        </configuration>
        <executions>
          <execution>
            <id>deploy</id>
            <phase>deploy</phase>
            <goals>
              <goal>deploy</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>

  • muleVersion: the target Mule runtime version where the application should be deployed.
  • username: an Anypoint Platform username with appropriate access to the target environment.
  • password: the Anypoint Platform user's password.
  • applicationName: the application name for CloudHub. To generate a unique name for the demo application, a timestamp is appended to the name.
  • environment: the target environment as configured in Anypoint Runtime Manager (ARM).
  • uri: if you are running ARM on a private instance, specify its URL here, e.g. anypoint.example.com.
  • workerType: one of Micro (0.1 vCores), Small (0.2 vCores), Medium (1 vCore), Large (2 vCores), or xLarge (4 vCores).

Jenkins Pipeline:

As per Jenkins website –

Jenkins Pipeline (or simply “Pipeline”) is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.

In this post, I will not go into much detail about Pipeline and presume that you are aware of it. This Jenkins Document may give you a good overview of what Pipeline is.

Jenkins Setup

The CloudHub and ARM deployments need access to the Anypoint Platform. Let's add the Anypoint credentials in Jenkins Credentials. We will use these credentials in our Pipeline to avoid hardcoding and password exposure.

If you do not specify anything in the ID attribute, Jenkins will assign an auto-generated ID. We will need this ID value for the credential lookup in the Pipeline.

Simple Pipeline

Now, we have all that we need for adding a Jenkins Pipeline to our Mule Application. For this, we will need to create a file named Jenkinsfile in the root of our application.

Here is the content of our Jenkinsfile –

pipeline {
  agent any
  stages {
    stage('Unit Test') { 
      steps {
        sh 'mvn clean test'
      }
    }
    stage('Deploy Standalone') { 
      steps {
        sh 'mvn deploy -P standalone'
      }
    }
    stage('Deploy ARM') { 
      environment {
        ANYPOINT_CREDENTIALS = credentials('anypoint.credentials') 
      }
      steps {
        sh 'mvn deploy -P arm -Darm.target.name=local-3.9.0-ee -Danypoint.username=${ANYPOINT_CREDENTIALS_USR}  -Danypoint.password=${ANYPOINT_CREDENTIALS_PSW}' 
      }
    }
    stage('Deploy CloudHub') { 
      environment {
        ANYPOINT_CREDENTIALS = credentials('anypoint.credentials')
      }
      steps {
        sh 'mvn deploy -P cloudhub -Dmule.version=3.9.0 -Danypoint.username=${ANYPOINT_CREDENTIALS_USR} -Danypoint.password=${ANYPOINT_CREDENTIALS_PSW}' 
      }
    }
  }
}

The stages work as follows:

  • Unit Test: runs the unit tests on our Mule application.
  • Deploy Standalone: deploys our Mule application to a standalone server. As we are not passing a mule.home parameter value, this step will actually download the Mule server from the Mule Maven repository and install it in the Jenkins workspace. If you want to deploy to an already installed instance, you may add the parameter -Dmule.home=/opt/mule to the Maven command.
  • Deploy ARM: deploys to a Mule server managed via Anypoint Runtime Manager. We need the Anypoint Platform credentials for this step, so we use the Jenkins credentials() function to look up our credentials by their ID value. This function automatically sets two environment variables named {ourvariable}_USR and {ourvariable}_PSW, which we use to pass the Anypoint username and password when calling our arm profile.
  • Deploy CloudHub: deploys to CloudHub using the cloudhub profile. The credentials setup is the same as for the ARM deployment. For CloudHub deployments, mule.version must match one of the runtime version names available on CloudHub.

Mule Runtime can be installed on many platforms and Mule Maven plugin does simplify the deployment process.

We can also do integration tests using this plugin. We used Mule Maven plugin and Jenkins Pipeline to automate our mule deployment to Anypoint Runtime Manager as well as CloudHub environment.


FTP Connector With Mule ESB

The FTP Connector implements a file transport channel so that your Mule application can exchange files with an external FTP server.

You can configure FTP as an inbound endpoint (which receives files) or outbound endpoint (which writes files to the FTP server). The FTP transport allows integration of the File Transfer Protocol into Mule.

Mule can poll a remote FTP server directory, retrieve files, and process them as Mule messages. Messages can also be uploaded as files to a directory on a remote FTP server. The default behavior of the inbound FTP connector is to pick up the file and delete it from the source folder.

FTP Connector as Inbound:

Drag and drop the FTP connector to the canvas and place it at the message source.

Configure the FTP connector and provide the information like host, port, path, username, and password.

By default, the behavior of the FTP connector is to read the file and delete it from the FTP server. The FTP connector also offers an option to move the file to a backup folder on the FTP server instead of deleting it.

FTP Connector as Outbound:

The FTP connector can be configured as Outbound to send the file to the destination. For the FTP connector to work as Outbound, place the FTP connector in the message processor region of the Mule flow.
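As an illustration, an inbound/outbound combination using the Mule 3.x FTP transport might look like the sketch below. The hosts, credentials, and paths are placeholders, not values from this article:

    <flow name="ftpFileCopyFlow">
        <!-- Inbound: poll the remote input directory and read files as messages -->
        <ftp:inbound-endpoint host="source.example.com" port="21" path="/input"
            user="ftpuser" password="secret" pollingFrequency="10000" doc:name="FTP"/>
        <!-- Outbound: write the message payload as a file to the destination server -->
        <ftp:outbound-endpoint host="destination.example.com" port="21" path="/output"
            user="ftpuser" password="secret" doc:name="FTP"/>
    </flow>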

Dynamic FTP Connections

Many integrations require connecting to different servers depending on a certain condition.

Examples of this include:

  • Connect to different invoice storage servers depending on the branch that emits an invoice.
  • Connect to different servers depending on an integration subject, such as in a multi-tenant use case.

To accommodate these use cases, the config element supports expressions, which makes it possible for connection parameters to evaluate these conditions and connect to the correct server.

Dynamic FTP Connection Example

The following example shows a dynamic multicast application that:

  1. Defines a config for the connector in which the host, username, and password are expressions.
  2. Describes a flow in which content is posted through HTTP.
  3. Uses the File connector to read a CSV file that contains a random set of FTP destinations with columns such as host, user, and port.
  4. Uses a <foreach> component to iterate over each of the lines in the CSV file.
  5. On each <foreach> iteration, resolves each expression in the connector config to a different value, establishing different connections to each of the servers.
<ftp:config name="FTP_Config" doc:name="FTP Config" >

<ftp:sftp-connection host="#[payload.host]" username="#[payload.user]" password="#[payload.password]" />

</ftp:config>

<flow name="streaming-multitenantFlow" >
<http:listener config-ref="HTTP_Listener_config" path="/multitenant"
doc:name="Listener" />
<set-variable variableName="content" value="#[payload]" doc:name="Variable" />
<file:read config-ref="File_Config" path="recipients.csv" doc:name="Read"
 outputMimeType="application/csv" />
<ee:transform doc:name="Transform Message">
<ee:message>

<ee:set-payload ><![CDATA[%dw 2.0
output application/java
---
payload map using (row = $) {
   host: row.Host,
   user: row.User,
   password: row.Password
}]]>

  </ee:set-payload>
  </ee:message>
  </ee:transform>
  <foreach doc:name="For Each" >
    <ftp:write config-ref="FTP_Config" doc:name="Write" path="demo.txt">
    <ftp:content >#[content]</ftp:content>
  </ftp:write>
</foreach>
<set-payload doc:name="Set Payload" value="Multicast OK"/>
</flow>
  • This sample application defines an FTP config in which the host, username, and password are expressions.
  • It uses a flow in which a random content is posted.
  • It uses the file connector to load a recipients file, which is a CSV file containing a random set of FTP destinations.
  • There’s a DataWeave transformation that splits a CSV file.
  • The application uses a foreach element to write the contents into each of the FTP destinations.
  • On each foreach iteration, each of the expressions in the FTP config resolves to a different value, establishing different connections to each of the servers


HTTP Request Configuration Example

A typical use of the HTTP request operation is to consume an external HTTP service using the default GET method.

By default, the GET and OPTIONS methods do not send the payload in the request; the body of the HTTP request is empty. The other methods send the message payload as the body of the request.

After sending a request, the connector receives the response and passes it to the next element in your application's flow.

The required minimum setting for the request operation is a host URI, which can include a path. You can configure the following types of authentication in the request operation:

  • Basic
  • OAuth
  • NT LAN Manager (NTLM)
  • Digest
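As a concrete starting point, a minimal request configuration and GET operation (no authentication; the host and path are placeholders) might look like this:

<http:request-config name="requestConfig">
    <http:request-connection host="api.example.com" port="80"/>
</http:request-config>

<!-- Sends a GET request with an empty body and passes the response downstream -->
<http:request config-ref="requestConfig" method="GET" path="/users"/>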

Map Between Mule Messages and HTTP Requests

By default, the HTTP request operation sends the Mule message payload as the HTTP request body but you can customize it using a DataWeave script or expression.
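For example, to send a JSON body built from the current message, you can set the body explicitly on the operation. The field names below are only illustrative:

<http:request config-ref="requestConfig" method="POST" path="/orders">
    <!-- Build the request body with an inline DataWeave expression -->
    <http:body>#[output application/json --- {orderId: payload.id, status: 'NEW'}]</http:body>
</http:request>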

Add Custom Parameters

In addition to the body of the request, you can configure:

  • Headers
  • Query parameters
  • URI parameters
  • A map of multiple headers or parameters

By default, the HTTP connector limits the HTTP request header section size (request line + headers) to 8191 bytes maximum.

To increase the number of bytes in the allowed header section, set mule.http.headerSectionSize in the wrapper.conf file or on the command line when you start Mule, as follows:

./mule -M-Dmule.http.headerSectionSize=16000

Headers

Select Headers in the General configuration to add headers to the request. For example, add the header names HeaderName1 and HeaderName2 with the header values HeaderValue1 and HeaderValue2.

You can use DataWeave expressions, for example #[{'HeaderName1': 'HeaderValue1', 'HeaderName2': 'HeaderValue2'}].

There are also use cases in which there are a set of default headers you always want to set. For example, you might want to always send authentication headers like x-csrf-token: Fetch.

You can specify default headers at the config level so that you don’t have to specify them on every single request. For example:

<http:request-config name="requestConfig">
    <http:request-connection host="localhost" port="8081" streamResponse="true" responseBufferSize="1024"/>
    <http:default-headers >
        <http:default-header key="x-csrf-token" value="Fetch" />
    </http:default-headers>
</http:request-config>

With this configuration, those headers will be added to every outbound request, alongside any headers you specify in the actual request.

The default headers also accept expressions, allowing you to use dynamic values. For example:

<http:request-config name="requestConfig">
    <http:request-connection host="localhost" port="8081" streamResponse="true" responseBufferSize="1024"/>
    <http:default-headers >
        <http:default-header key="custom-role" value="#[vars.role]" />
    </http:default-headers>
</http:request-config>

This practice is helpful and convenient but has some important caveats to consider. Using expressions in a configuration element constitutes a dynamic configuration.

The lifecycle of a dynamic configuration is not explained here, but in a nutshell, all expressions in the config are evaluated each time the connector is used and for each set of distinct values a new configuration instance is created and initialized.

For the particular case of the HTTP connector, this means that it’s only a good idea to use default headers with expressions when you are certain that the universe of all possible values that those expressions can take is small.

If, on the contrary, it is likely that every single evaluation is going to yield a different value, you’ll end up creating several instances of the HTTP client, which will affect performance (due to the extra work that initializing a client takes) and be resource-consuming.

For such cases, the recommendation is to not use default headers but rather to specify headers on the request operation itself:

<http:request config-ref="requestConfig" method="#[attributes.method]" path="#[attributes.maskedRequestPath]">
	<http:headers>#[{'custom-role':vars.role}]</http:headers>
</http:request>

Query Parameters

In General > Request > Query Parameters, click the plus icon (+) to add a parameter to a request. Type a name and value for the parameter or use a DataWeave expression to define the name and value.
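Query parameters can also be supplied as a DataWeave map on the request operation itself; vars.pageSize below is a hypothetical variable used only for illustration:

<http:request config-ref="requestConfig" method="GET" path="/books">
    <http:query-params>#[{'author': 'Mary', 'limit': vars.pageSize}]</http:query-params>
</http:request>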

