Google Analytics with Tableau

This article describes how to connect Tableau to Google Analytics (GA) and set up the data source.

Before you begin

Before you begin, gather this connection information:

  • GA email address and password

Make the connection and set up the data source

  1. Start Tableau and, under Connect, select Google Analytics. For a complete list of data connections, select More under To a Server. In the tab Tableau opens in your default browser, do the following:
    1. Sign in to GA using your email or phone, and then select Next to enter your password. If multiple accounts are listed, select the account that has the GA data you want to access and enter its password, if you are not already signed in.
    2. Select Allow so that Tableau Desktop can access your GA data.
    3. Close the browser window when notified to do so.
  2. On the data source page, do the following:
    1. (Optional) Select the default data source name at the top of the page, and then enter a unique data source name for use in Tableau. For example, use a data source naming convention that helps other users of the data source figure out which data source to connect to.
    2. Follow the steps at the top of the data source page to complete the connection:
      1. Select an Account, Property, and Profile using the drop-down menus.
      2. Select filters for a date range and a segment.
        1. For Date Range, you can select one of the predefined date ranges or select specific dates. When selecting a date range, GA can provide complete data only up to the previous full day. For example, if you choose Last 30 days, data is retrieved for the 30-day period ending yesterday.
        2. For Segment, select a segment to filter your data. Segments are preset filters that you can set for a GA connection. Default Segments are defined by Google, and Custom Segments are defined by the user on the GA website. Segments also help prevent sampling by filtering the data as defined by the segment. For example, with a segment you can get results for a specific platform, such as tablets, or for a particular search engine, such as Google.
      3. Note: GA restricts the amount of data that it returns in a query. When you try to retrieve more data than GA allows in a single query, GA returns sampled data instead. If Tableau detects that your GA query might return sampled data, Tableau attempts to bypass the query restriction and return all data instead.

  3. Add dimensions and measures by using the Add Dimension and Add Measure drop-down menus, or select a predefined set of measures from the Choose a Measure Group drop-down menu. Some dimensions and measures cannot be used together.

  4. Select the sheet tab to start your analysis. After you select the sheet tab, Tableau imports the data by creating an extract. Note that Tableau Desktop supports only extracts for Google Analytics. You can update the data by refreshing the extract.

Google Analytics data source example

Here is an example of a Google Analytics data source connection using Tableau Desktop on a Windows computer:

All data vs. sampled data returned from a query

GA restricts the amount of data that it returns from a query and provides sampled data instead. Sampled data is a random subset of your data. When performing analysis on sampled data, you can miss interesting outliers, and aggregations can be inaccurate. If Tableau detects that your query might return sampled data, by default, Tableau creates multiple queries from your query, and then combines the results from the queries to return all data.

If the query stays within the boundaries of the query restrictions, GA doesn’t return sampled data and you do not see the message about sampled data.

Troubleshoot issues with returning all data

If your query continues to return sampled data, consider the following:

  • Missing date dimension – You must use the date dimension in your query to return all data.
  • Too much data – Your query might contain too much data. Reduce the date range. Note that the minimum date range is one day.
  • Non-aggregatable dimensions and measures – Some dimensions and measures cannot be separated into multiple queries. If you suspect a problematic dimension or measure in your query, hover over the All data button to see the tooltip that shows which dimensions or measures to remove from your query.
  • Legacy workbooks – Workbooks created in Tableau Desktop 9.1 and earlier cannot return all data. Open the legacy workbook in Tableau Desktop 9.2 and later and save the workbook.

Return sampled data

In cases when workbook performance is critical or there are specific dimensions and measures you want to use in your query that are not supported by Tableau’s default query process, use sampled data instead. To return sampled data, select the Sample data button.

Mule 4: Java Integration Expressions versus Code

Experienced Mule users will notice that Mule 4 takes a more opinionated approach about how to structure apps, which limits what can be done through the expression language. The intention is to provide a clear separation between the flow logic and the business logic that should be extracted through code.

If you want to extract, query, transform, or otherwise work with data in your flows, DataWeave expressions and transforms are the recommended tool. If you want to write custom logic, instantiate Java objects, or call arbitrary methods, MuleSoft recommends that you encapsulate this code into scripts or classes that can be injected and tested easily.

This is why MuleSoft removed the Expression component and Expression transformer in favor of encouraging you to cleanly separate your logic into scripts or Java classes, instead.

Calling Static Java Methods from DataWeave

When you want to call out to Java logic to help format or parse data, DataWeave now allows you to call out to static methods. Consider this Java method:

package org.acme;
public class MyCompanyUtils {
  public static String reformat(String input) {
    return …;
  }
}

You can call it through the following DataWeave code:

import java!org::acme::MyCompanyUtils
---
{
  date: MyCompanyUtils::reformat(payload.input)
}

Scripting Module

The Scripting module replaces the Mule 3 Scripting component. The Mule 4 module enables you to embed Groovy, Ruby, Python, or JavaScript scripts inside Mule flows. You can inject data from the Mule message into your code using the new parameters configuration attribute.

<script:execute engine="groovy">
    <script:code>
         return "$payload $prop1 $prop2"
    </script:code>
    <script:parameters>
         #[{prop1: "Received", prop2: payload.body}]
    </script:parameters>
</script:execute>

To use the scripting module, simply add it to your app using the Studio palette, or add the following dependency in your pom.xml file:

<dependency>
  <groupId>org.mule.modules</groupId>
  <artifactId>mule-scripting-module</artifactId>
  <version>1.1.0</version> <!-- or newer -->
  <classifier>mule-plugin</classifier>
</dependency>

Java Module

While the Scripting module is a very powerful tool that allows for interoperation with Java by executing an arbitrary set of instructions, often you simply need to instantiate a class or execute a single method. While Mule 3 usually relied on MEL for this, the Java module was introduced in Mule 4 to support these use cases. Other advantages of the Java module over the Scripting module are:

  • Support for DataSense: Each time you execute a method, you will get DataSense for the output type and the method’s input arguments.
  • UI Support: You get visual aids in terms of methods available for each class, autocompletion, and so on.

Create a New Java Instance

In Mule 3:

<set-payload value="#[new com.foo.AppleEater()]" />
<set-payload value="#[new com.foo.AppleEater('some string arg', flowVars.apple)]" />

In Mule 4:

<java:new class="com.foo.AppleEater" constructor="AppleEater()"/>

<java:new class="com.foo.AppleEater" constructor="AppleEater(String, Apple)">
  <java:args>#[{name: 'some string arg', apple: vars.apple}]</java:args>
</java:new>

Invoke an Instance Method

In Mule 3:

<expression-component>
  flowVars.appleEater.chew(500)
</expression-component>

In Mule 4:

<java:invoke class="com.foo.AppleEater" method="chew(Integer)" instance="#[vars.appleEater]">
  <java:args>
    #[{chewingTime: 500}]
  </java:args>
</java:invoke>

The invoke functionality can also be used through DataWeave functions:

<set-payload value="#[Java::invoke('com.foo.AppleEater', 'chew(Integer)', vars.appleEater, {chewingTime: 500})]"/>

To use the Java module, simply add it to your application using the Studio palette, or add the following dependency to your pom.xml file:

<dependency>
  <groupId>org.mule.module</groupId>
  <artifactId>mule-java-module</artifactId>
  <version>1.0.0</version> <!-- or newer -->
  <classifier>mule-plugin</classifier>
</dependency>

Mule 4: Error Handlers and Types

In Mule 4, error handling is no longer limited to a Java exception handling process that requires you to check the source code or force an error in order to understand what happened.

Though Java Throwable errors and exceptions are still available, Mule 4 introduces a formal Error concept that’s easier to use. Now, each component declares the type of errors it can throw, so you can identify potential errors at design time.

Mule Errors

Execution failures are represented with Mule errors that have the following components:

  • A description of the problem.
  • A type that is used to characterize the problem.
  • A cause, the underlying Java Throwable that resulted in the failure.
  • An optional error message, which is used to include a proper Mule Message regarding the problem.

For example, when an HTTP request fails with a 401 status code, a Mule error provides the following information:

Description: HTTP GET on resource 'http://localhost:36682/testPath' failed: unauthorized (401)

Type: HTTP:UNAUTHORIZED

Cause: a ResponseValidatorTypedException instance

Error Message: { "message" : "Could not authorize the user." }

Error Types

In the example above, the error type is HTTP:UNAUTHORIZED, not simply UNAUTHORIZED. Error types consist of both a namespace and an identifier, allowing you to distinguish the types according to their domain (for example, HTTP:NOT_FOUND and FILE:NOT_FOUND). While connectors define their own namespace, core runtime errors have an implicit one: MULE:EXPRESSION and EXPRESSION are interpreted as one.

Another important characteristic of error types is that they might have a parent type. For example, HTTP:UNAUTHORIZED has MULE:CLIENT_SECURITY as its parent, which, in turn, has MULE:SECURITY as its parent. This establishes error types as specializations of more general ones: an HTTP unauthorized error is a kind of client security error, which is a kind of broader security issue.

These hierarchies mean routing can be more general, since, for example, a handler for MULE:SECURITY will catch HTTP unauthorized errors as well as OAuth errors. Below you can see what the core runtime hierarchy looks like:

All errors are either general or CRITICAL, the latter being so severe that they cannot be handled. At the top of the general hierarchy is ANY, which allows matching all types under it.

It’s important to note the UNKNOWN type, which is used when no clear reason for the failure is found. This error can only be handled through the ANY type, so that more specific errors can be introduced in the future without changing the existing app’s behavior.

When it comes to connectors, each connector defines its error type hierarchy considering the core runtime one, though CONNECTIVITY and RETRY_EXHAUSTED types are always present because they are common to all connectors.

Error Handlers

Mule 4 has redesigned error handling by introducing the error-handler component, which can contain any number of internal handlers and can route an error to the first one matching it. Such handlers are on-error-continue and on-error-propagate, which both support matching through an error type (or group of error types) or through an expression (for advanced use cases).

These are quite similar to the Mule 3 choice (choice-exception-strategy), catch (catch-exception-strategy), and rollback (rollback-exception-strategy) exception strategies. However, they are much simpler and more consistent.

If an error is raised in Mule 4, an error handler will be executed and the error will be routed to the first matching handler. At this point, the error is available for inspection, so the handlers can execute and act accordingly, relative to the component where they are used (a Flow or Try scope):

  • An on-error-continue executes and uses the result of the execution as the result of its owner (as though the owner completed the execution successfully). Any transactions at this point are committed, as well.
  • An on-error-propagate rolls back any transactions, executes, and uses that result to re-throw the existing error, meaning its owner is considered to be "failing." (A configuration sketch follows this list.)
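
The sketch below is a minimal illustration (the flow name, config-ref values, and paths are hypothetical, not taken from the article) of an error handler that tolerates one specific error type and propagates everything else:

<flow name="orders-flow">
    <http:listener config-ref="HTTP_Listener_config" path="/orders"/>
    <http:request config-ref="HTTP_Request_config" path="/backend/orders"/>
    <error-handler>
        <!-- A missing resource is tolerated: the flow is considered successful -->
        <on-error-continue type="HTTP:NOT_FOUND">
            <set-payload value="No orders found"/>
        </on-error-continue>
        <!-- Anything else is logged and re-thrown: the flow is considered failing -->
        <on-error-propagate type="ANY">
            <logger level="ERROR" message="#[error.description]"/>
        </on-error-propagate>
    </error-handler>
</flow>

With this configuration, a request that fails with HTTP:NOT_FOUND still produces a successful response from the flow, while any other error is re-thrown to the caller.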

Consider the following application where an HTTP listener triggers a Flow Reference component to another flow that performs an HTTP request. If everything goes right when a message is received (1 below), the reference is triggered (2), and the request performed (3), which results in a successful response (4).

If the HTTP request fails with an HTTP:NOT_FOUND error (see 3 below) because of the error handler configuration in inner-flow, the error is propagated (4), and the Flow Reference component fails (2).

However, because primary-flow handles the error with on-error-continue, the Logger it contains (5) executes, and a successful response (a 200 code) is returned (6).

Try Scope

For the most part, Mule 3 only allows error handling at the flow level, forcing you to extract logic to a flow in order to address errors.

In Mule 4, we’ve introduced a Try scope that you can use within a flow to do error handling of just inner components. The scope also supports transactions, which replaces the old Transactional scope.

The error handler behaves as explained earlier. In the example sketched below, any database connection errors are propagated, causing the Try to fail and the flow’s error handler to execute.

In this case, any other errors will be handled, and the Try scope will be considered successful, which, in turn, means that the next processor in the flow, an HTTP request, will continue executing.
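
As a rough sketch of that scenario (the component names and configurations are assumptions, since the original screenshot is not included), a Try scope wrapping a database operation might look like this:

<flow name="sync-flow">
    <http:listener config-ref="HTTP_Listener_config" path="/sync"/>
    <try>
        <db:select config-ref="Database_Config">
            <db:sql>SELECT * FROM customers</db:sql>
        </db:select>
        <error-handler>
            <!-- Connection problems are re-thrown, so the Try and the flow fail -->
            <on-error-propagate type="DB:CONNECTIVITY"/>
            <!-- Any other error is handled, and the Try is considered successful -->
            <on-error-continue type="ANY">
                <logger level="WARN" message="#[error.description]"/>
            </on-error-continue>
        </error-handler>
    </try>
    <!-- Runs whenever the Try completes successfully -->
    <http:request config-ref="HTTP_Request_config" path="/notify"/>
</flow>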

Error Mapping

Mule 4 now also allows for mapping default errors to custom ones. The Try scope is useful, but if you have several equal components and want to distinguish the errors of each one, using a Try on them can clutter your app.

Instead, you can add error mappings to each component, meaning that all errors, or only certain kinds of errors, coming from the component are mapped to another error of your choosing.

If, for example, you are aggregating results from 2 APIs using an HTTP request component for each, you might want to distinguish between the errors of API 1 and API 2, since by default, their errors will be the same.

By mapping errors from the first request to a custom API_1 error and errors in the second request to API_2, you can route those errors to different handlers.

The next example maps HTTP:INTERNAL_SERVER_ERROR so that different handling policies can be applied if the APIs go down (propagating the error in the first API and handling it in the second API).
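
A minimal sketch of that idea follows (the config-ref values and the custom APP namespace are assumptions; any non-reserved namespace can be used for custom error types):

<flow name="aggregate-flow">
    <http:request config-ref="API_1_Config" path="/data">
        <!-- Remap server errors from the first API to a custom type -->
        <error-mapping sourceType="HTTP:INTERNAL_SERVER_ERROR" targetType="APP:API_1_DOWN"/>
    </http:request>
    <http:request config-ref="API_2_Config" path="/data">
        <error-mapping sourceType="HTTP:INTERNAL_SERVER_ERROR" targetType="APP:API_2_DOWN"/>
    </http:request>
    <error-handler>
        <!-- The first API going down is treated as fatal -->
        <on-error-propagate type="APP:API_1_DOWN"/>
        <!-- The second API going down is tolerated and only logged -->
        <on-error-continue type="APP:API_2_DOWN">
            <logger level="WARN" message="API 2 is unavailable"/>
        </on-error-continue>
    </error-handler>
</flow>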

Amazon S3 Design Center Configuration – Mule 4

Design Center enables you to create apps visually. To use Design Center, work with your Anypoint Platform administrator to ensure that you have a Design environment. For more information, see the Flow Designer Tour.

To create an app in Design Center:

  • Configure the input source (trigger) for your app.
  • Add the connector as a component to process the input for the app.

For information about Amazon S3 fields, see the Amazon S3 Connector Reference.

Configure the Input Source Trigger

To configure a trigger:

  1. In Design Center, click Create.
  2. Click Create new application.
  3. Specify a value for Project name, and click Create.
  4. Exit from Let’s get started by clicking Go straight to canvas.
  5. Click on the Trigger card.
  6. Configure the trigger.

You can use the following items as a trigger:

  • The connector’s On Deleted Object operation to initiate access to your app when an Amazon S3 object is deleted.
  • The connector’s On New Object operation to initiate access to your app when an Amazon S3 object is created.
  • HTTP Connector to initiate access to your app when the HTTP Listener accepts a request from a browser or an application, such as Postman or cURL.
  • Scheduler to initiate access to your app at a specific time.

If you use an Amazon S3 connector’s operation as a trigger, enter the name of the Amazon S3 bucket associated with the operation in the Bucket field on the General tab.

Configure the Target Component

  1. Click “+” next to the trigger card.
  2. In Select a component, search for and select the connector name.
  3. Select an operation for the connector.
  4. Enter the required values in the General tab.
  5. If you are:
    1. Using the default Amazon S3 storage, leave the default entries on the Proxy and Advanced tabs.
    2. Connecting to storage other than the default AWS S3, specify its URL in the Advanced tab's S3 Compatible Storage URL field.
  6. If needed, enter values for other tabs.
  7. Specify access information for the connector resource, as described below.
  8. Click Test to test the connection.

Validating a connection with Test Connection requires that you have permission in AWS IAM to the action s3:ListAllMyBuckets. If you don’t have this permission, the test fails. However, you can still use the connector and the operations to which you have access.

Access to operations on Amazon S3 is further controlled through policies, so it is not always possible to validate your credentials until the exact operation for which you have access runs; this can vary based on the bucket name and other parameters. For example, the test connection can fail if your credentials have a restricted policy.
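
As a rough illustration (the bucket name and the exact list of object actions are assumptions to adapt to your own requirements), an IAM policy that permits Test Connection plus basic object operations could look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTestConnection",
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    },
    {
      "Sid": "AllowBucketOperations",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::my-example-bucket",
        "arn:aws:s3:::my-example-bucket/*"
      ]
    }
  ]
}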

Amazon S3 Connector – Mule 4

Amazon S3 Connector v5.6.x

Anypoint Connector for Amazon S3 (Amazon S3 Connector) provides connectivity to the Amazon S3 API, enabling you to interface with Amazon S3 to store objects, download and use data with other AWS services, and build applications that require internet storage.

Instant access to the Amazon S3 API enables seamless integrations between Amazon S3 and other databases, CMS applications such as Drupal, and CRM applications such as Salesforce. Use Amazon S3 to store and retrieve any amount of data at any time, from anywhere on the web. You can accomplish these tasks by using the simple and intuitive web interface of the AWS Management Console.

AWS SDK for Java provides a Java API for AWS infrastructure services. The Amazon S3 connector is built using the SDK for Java.

About Connectors

Anypoint connectors are Mule runtime engine extensions that enable you to connect to APIs and resources on external systems, such as Salesforce, Amazon S3, ServiceNow, and Twitter.

Prerequisites

Before creating an app, you must have access to the Amazon S3 target resource, Amazon Web Services, and Anypoint Platform. You must also understand how to create a Mule app using Design Center or Anypoint Studio, and have AWS Identity and Access Management (IAM) credentials.

For the Amazon S3 operations to work, you need to enable or update the subset of the overall list of actions in the Amazon S3 bucket to specify that the AWS account has access to these actions.

How to implement a Jenkins Pipeline for a MuleSoft application

Jenkins is a free, open-source continuous integration server written in Java that helps automate the integration process. It is widely used for application development. Jenkins Pipeline is a suite of plugins that supports implementing and integrating continuous delivery pipelines into Jenkins.

The Pipeline plugin provides capabilities such as suspending and resuming work, managing pipeline code, sharing libraries, and so on. Its continuous integration features also make it usable for many kinds of jobs alongside automation.

MuleSoft is a lightweight enterprise service bus (ESB) that provides an integration platform for connecting applications, data, and devices in the cloud or on-premises. It is a Java-based platform that lets businesses connect their applications across cloud and on-premises environments.

Moreover, many companies today use this platform to fulfil their integration needs. It helps businesses scale to new heights with highly available resources, and MuleSoft is in high demand among companies.

Jenkins Pipeline parameters

A Jenkins Pipeline requires some parameters before it starts. These parameters control the activities of the pipeline, such as the environment it deploys into, and keep its run within the defined guidelines. They help the pipeline run smoothly.

The following example shows how a parameter is declared and used:

pipeline {
    agent any
    parameters {
        booleanParam(defaultValue: true, description: '', name: 'userFlag')
    }
    stages {
        stage("foo") {
            steps {
                echo "flag: ${params.userFlag}"
            }
        }
    }
}

The above example shows how a parameter works. There are different kinds of parameters, such as booleanParam, text, file, choice, and string. Each performs its role while the pipeline runs, but with a different approach.

There are three different ways of accessing parameters from a script, illustrated in the snippet after this list:

  • env.<ParameterName> returns the parameter as a String from the environment.
  • params.<ParameterName> returns the parameter with its declared (strongly typed) value.
  • "${<ParameterName>}" returns the value through string interpolation.
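
A small sketch of the three access styles inside a step, assuming the userFlag parameter declared in the earlier pipeline example (the env and plain interpolation forms rely on the parameter being exposed as a build variable):

steps {
    // Strongly typed access: userFlag stays a Boolean
    echo "params: ${params.userFlag}"
    // Environment access: always a String
    echo "env: ${env.userFlag}"
    // Plain interpolation of the build variable
    echo "inline: ${userFlag}"
}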

Understanding Jenkinsfile

A Jenkinsfile is a text file that defines a Jenkins pipeline and the steps it runs. There are many benefits to using a Jenkinsfile, such as:

  • It helps to create pipelines automatically for all the resources.
  • It helps to review code on the pipeline.
  • Jenkinsfile helps to audit the pipeline.

There are two types of syntax for defining a Jenkinsfile:

  • Declarative
  • Scripted

Declarative: Declarative pipeline syntax makes it easy to create pipelines. It follows a predefined set of rules for structuring a Jenkins pipeline and offers control over all aspects of pipeline execution.

Scripted: This syntax runs on the Jenkins master in a lightweight executor. It uses resources to translate the pipeline into commands.

Declarative and scripted syntax differ from each other and serve different purposes.
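
For contrast, a minimal scripted equivalent of the earlier declarative example might look like this (assuming the same userFlag parameter is defined on the job):

// Scripted syntax: plain Groovy wrapped in a node block
node {
    stage('foo') {
        echo "flag: ${params.userFlag}"
    }
}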

Jenkins Pipeline example

There are many examples of Jenkins Pipelines, such as ANSI Color Build Wrapper, Archive Build Output Artifacts, Artifactory Gradle Build, Artifactory Maven Build, External Workspace Manager, Git Commit, Load From File, Jobs In Parallel, and so on.

Each of these examples serves a different purpose for a Jenkins pipeline, and there are many more.

How to deploy Mule application

Mule applications are deployed to an engine called the Mule runtime. It supports various deployment targets, such as CloudHub, Anypoint Runtime Manager, and on-premises Mule instances. Each target requires different tooling, but here we use two things: a Jenkins pipeline and the Mule Maven plugin. Besides this, we need to check that everything is in good condition.

Now we look at deploying a MuleSoft application using a Jenkins Pipeline along with the Maven plugin, including the various steps and commands used in the process.

First, prepare Maven by adding the MuleSoft plugin repository:

<pluginRepositories>
    <pluginRepository>
        <id>mule-public</id>
        <url>https://repository.mulesoft.org/nexus/content/repositories/releases</url>
    </pluginRepository>
</pluginRepositories>

After adding the repository, we can include the plugin in pom.xml:

<plugin>
  <groupId>org.mule.tools.maven</groupId>
  <artifactId>mule-maven-plugin</artifactId>
  <version>2.2.1</version>
</plugin>

First, set up Jenkins. For the pipeline to deploy to CloudHub, it has to access the Anypoint runtime platform, so add your Anypoint credentials to Jenkins. Later, we will create a pipeline. Now we have all the tools needed to add a Jenkins Pipeline to the MuleSoft application.

Next, create a file named Jenkinsfile in the root of the application. It allows us to run integration tests for Mule applications and to deploy the application to different environments.

The file contains various commands, such as running the unit tests of the MuleSoft application, deploying the Mule application to a standalone server, and deploying through Anypoint Runtime Manager.
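
As a rough sketch (the credentials ID, property names, and Maven goals below are assumptions; the actual CloudHub or Runtime Manager deployment settings are expected to live in the mule-maven-plugin configuration in pom.xml), a declarative Jenkinsfile for such a pipeline might look like this:

pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                // Run the unit tests of the Mule application
                sh 'mvn clean test'
            }
        }
        stage('Deploy') {
            steps {
                // 'anypoint-credentials' is a hypothetical Jenkins credentials ID
                withCredentials([usernamePassword(credentialsId: 'anypoint-credentials',
                                                  usernameVariable: 'ANYPOINT_USER',
                                                  passwordVariable: 'ANYPOINT_PASS')]) {
                    // Deployment target details are assumed to be configured in pom.xml
                    sh 'mvn clean deploy -Danypoint.username=$ANYPOINT_USER -Danypoint.password=$ANYPOINT_PASS'
                }
            }
        }
    }
}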

The next step is to create a Jenkins Pipeline job and configure it with the Jenkinsfile. After that, we can watch the pipeline job run and perform the deployments. It should run cleanly to avoid any discrepancies.

Deployment results

Here we look at the deployment results, produced by running the Jenkins pipeline file along with the Maven plugin. They show the different deployments in action.

First, we check the results of the CloudHub deployment.

[INFO] --- mule-maven-plugin:2.2.1:deploy (deploy) @ mule-maven-deployment-demo ---
[INFO] No application configured. Using project artifact: /Users/manik/.jenkins/workspace/mule-maven-deployment-demo-simple-pipeline/target/mule-maven-deployment-demo-1.0.0-SNAPSHOT-b0.zip
[INFO] Deploying application mule-maven-deployment-demo to Cloudhub
[INFO] Application mule-maven-deployment-demo already exists, redeploying
[INFO] Uploading application contents mule-maven-deployment-demo
[INFO] Starting application mule-maven-deployment-demo
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

Next, we check the ARM (Anypoint Runtime Manager) deployment results.

[INFO] --- mule-maven-plugin:2.2.1:deploy (deploy) @ mule-maven-deployment-demo ---
[INFO] No application configured. Using project artifact: /Users/manik/.jenkins/workspace/mule-maven-deployment-demo-simple-pipeline/target/mule-maven-deployment-demo-1.0.0-SNAPSHOT-b0.zip
[INFO] Found application mule-maven-deployment-demo on server local-3.9.0-ee. Redeploying application...
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

MuleSoft documentation

Delivering a rich experience to the customers of any business requires integrating systems and ensuring proper data flow among them. MuleSoft provides this integration across different systems through APIs. While documenting, there are many things to review that can help with this.

Besides this, it offers various deployment options for applications. MuleSoft is an API-based tool.

Its Anypoint Platform helps increase developer productivity by reducing development time. In addition, it promotes reusability and collaboration through open technologies.

Advantages of MuleSoft

There are many advantages to MuleSoft. By using it, we can easily integrate a Java framework with any other type of project. The ESB manages the integration between the different components and applications. It has many components, such as CloudHub, Visualizer, Runtime Manager, Monitoring, Anypoint Connectors, and API Manager.

It manages resources while reducing resolution time, and it shrinks integration costs without disturbing ongoing business processes. It increases enterprise value through tools that enable faster development, API implementation, and process testing.

Finally, we can deliver great services that leave customers more satisfied than ever.

Thus, the sections above explain the process of implementing a Jenkins pipeline for a MuleSoft application along with the Maven plugin. The Jenkins Pipeline plugin suite supports implementing pipelines in Jenkins and helps integrate continuous delivery with the automation process.

MuleSoft is a lightweight integration platform for connecting different applications, whether they are in the cloud or on-premises. Both platforms are written in Java, but they serve different purposes.

This gives an overview of using Jenkins pipelines with MuleSoft applications and integrating them for various jobs.

4 Ways to Connect Tableau to MongoDB

MongoDB is one of the most popular of a new breed of “NoSQL” databases. NoSQL databases are essentially the opposite of relational databases such as Oracle, SQL Server, MySQL, PostgreSQL, etc. They allow for the storage of unstructured and semi-structured data as well as the ability to maintain flexible schemas.


Here are a few key things to know about MongoDB:

  • Focuses on the storage of “documents” (as opposed to graph databases or other types of NoSQL databases).  
  • Data is stored in JSON format (technically, they store data in a binary representation of JSON they call BSON).
  • Built with developers in mind, so it has lots of tools, APIs, and drivers to meet the needs of virtually any developer. 
  • Because it does not require the creation of rigidly-defined schemas, it provides developers with lots of flexibility. The focus can be shifted from “schema-on-write” to “schema-on-read.” This creates much more agility for developers. 
  • Built with a distributed architecture so it is highly available, scalable, durable, and reliable out-of-the-box. 
  • It is open source, though MongoDB also sells licensed enterprise versions (more on that later).

Because of the above, MongoDB is a fantastic generalized database for any type of data from unstructured and structured data (and everything in between). It makes a great platform for small to medium data lakes. And it is one of the top choices of database for many modern developers.

Connecting Tableau to MongoDB

With the continued growth and popularity of the MongoDB platform, we, as analytics professionals, will likely cross paths with it at some point and will need to connect Tableau (or other BI tools) to it.

The important thing to remember here is that Tableau expects data to come in a relational format—tables with columns and rows that are related to other similarly structured tables.

But MongoDB is not relational, so that immediately creates some challenges. Some restructuring of the data will inevitably be necessary for it to be consumable in Tableau. Thus, connecting to MongoDB is a bit trickier than connecting to more commonly used relational databases.

In this blog, I’m going to provide you with 4 options for connecting Tableau to your MongoDB data. For my examples, I’ll be using an instance of MongoDB Atlas, Mongo’s cloud database-as-a-service offering. You can connect to it using the following connection details:

Cluster Name: Mongo-shard-0/mongo-shard-00-00-pw3el.mongodb.net:27017,mongo-shard-00-01-pw3el.mongodb.net:27017,mongo-shard-00-02-pw3el.mongodb.net:27017

Username: nosql

Password: nosql

The cluster has a number of sample databases installed already, one of which is similar to Tableau’s Superstore, called sample_supplies; it contains a single collection called sales, which I’ll be using in the examples below.

Note: What we call tables in relational databases are called collections in MongoDB. Rows or records are known as documents.
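
For instance, a simplified, hypothetical document in a sales collection (not the actual sample_supplies schema) might look like this, with nested fields and an array instead of flat columns:

{
  "_id": "5bd761dcae323e45a93ccfe8",
  "saleDate": "2015-03-23T21:06:49.506Z",
  "storeLocation": "Denver",
  "customer": { "gender": "M", "age": 42 },
  "items": [
    { "name": "binder", "quantity": 3, "price": 14.16 },
    { "name": "notepad", "quantity": 2, "price": 35.29 }
  ]
}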

Option 1: Export Data as JSON

The first option is to export data out of MongoDB into JSON files, then leverage Tableau's native JSON connector. To do this, we'll need to do the following:

1) On the computer running MongoDB, open a command line. If you cannot connect to the computer, as is the case with our Atlas instance, you'll need to install the MongoDB utilities on your computer. To do this, download MongoDB and choose the custom install option. Deselect all options except for the Import/Export tools.

2) Open a command line and navigate to the directory in which the utilities were installed (on Windows, it should be something like this: C:\Program Files\MongoDB\Server\4.0\bin).

3) Run mongoexport using the following syntax (the values in angle brackets are placeholders for your own values):

mongoexport --host <Cluster or Host Name> --ssl --username <Username> --password <Password> --authenticationDatabase admin --db <Database Name> --collection <Collection Name> --type json --out <Output JSON File Name>

To export the sales collection from the sample_supplies database in our Atlas cluster, use the following command:

mongoexport --host Mongo-shard-0/mongo-shard-00-00-pw3el.mongodb.net:27017,
mongo-shard-00-01-pw3el.mongodb.net:27017,
mongo-shard-00-02-pw3el.mongodb.net:27017 --ssl --username nosql --password nosql --authenticationDatabase admin --db sample_supplies
--collection sales --type json --out c:\sales.json

4) Use Tableau’s JSON file connector and select the json file. Since the JSON file is not structured in a relational format, Tableau will prompt you to select which “schemas” you wish to include.

Mule as an ESB for .NET application connectivity

The difficulty with .NET connectivity

Connecting systems, exchanging data and arranging processes across multiple disparate systems are challenging and time-consuming tasks. Often, an Enterprise Service Bus (ESB) is employed to manage the communication and interactions of applications and services.

Though an ESB makes it easy for applications composed in different programming languages to interoperate, there are not many ESBs compatible with .NET web services. Most ESBs tend to be focused on Java interoperability, resulting in a need for Java developers or retraining in order to create the connectivity a business needs from scratch.

To overcome this challenge, Microsoft offers the Biztalk integration broker to help companies integrate and manage business processes. Biztalk is not an actual ESB, but rather a middleware that can be configured to perform functions of an ESB.

It is a heavyweight and complex solution, lacking agility and integration with all of Microsoft’s offerings. Also, because it runs exclusively on a Microsoft platform, it limits the heterogeneous applications and services that it can employ.

Moreover, with Microsoft’s lack of focus and investment in Biztalk, the future of the Biztalk Server looks bleak.

Without a powerful ESB, organizations often turn to point-to-point integration which is difficult, fragile, and complex. Updates require experienced developers to manually make changes to code, leaving the infrastructure vulnerable to errors.

Additionally, infrastructure complexity increases as the number of applications, systems, and services residing on-premises and in the cloud increases, creating a tangled web of connections. This tangled system becomes fragile, as the slightest mistakes can create complications.

.NET connectivity with Mule as an ESB

MuleSoft supports .NET compatibility through its lightweight, language-agnostic Enterprise Service Bus. Mule as an ESB provides native language support to .NET services and applications. Transports, which provide functionality and carry information from app to app within Mule's ESB, are powered by connectors.

By leveraging a library of Anypoint Connectors, businesses can easily create repeatable integration solutions to synchronize data across multiple platforms. Moreover, with Mule as an ESB, securing a web service through an HTTP web server, authentication tokens in the SOAP header, or via WS-Security can be done in a few simple configuration steps.

Mule Enterprise Service Bus serves as the middleware component to create interoperability to a number of .NET web services:

Microsoft SQL

MS SQL Server is a database management system that functions to store and retrieve data regardless of where it may reside. Mule as an ESB can create connectivity to the database through its JDBC (Java Database Connectivity) Transport.

JDBC is essentially an API that enables users to connect to databases and spreadsheets and perform CRUD (create, read, update, delete) operations on database records. The JDBC Transport allows organizations to send and receive messages with a database using the JDBC protocol.

Using the transport is painless: simply place the endpoint within the Mule flow. To learn more, explore the JDBC documentation.

MSMQ

Microsoft Message Queuing enables individual applications running at different time frames to communicate across distinct networks and systems. It provides temporary storage for messages, allowing them to be sent or received as conditions permit. This queuing system works even if the network is temporarily offline.

MuleSoft offers numerous Anypoint connectors to create connectivity. The connectors can be utilized as messaging protocol, allowing applications running on disparate servers to communicate.

Anypoint connectors allow for communication across networks and systems that may not always be connected, a useful capability for mission-critical processes.

MS dynamics

MS Dynamics is a family of Enterprise Resource Planning (ERP) and Client Relationship Management (CRM) software that provides finance, analytics, accounting, sales, and marketing tools. Integrating MS Dynamics provides enterprises agility, flexibility, and visibility.

MuleSoft offers an Anypoint Connector to create compatibility between MS Dynamics and business systems such as accounting, marketing, sales, customer service, and retail to enable data migration, synchronization of backend systems, and seamless connectivity to other Microsoft offerings.

The connector solves migration and integration challenges between systems and data sources, on-premises and in the cloud.

SharePoint

SharePoint is a Content Management System (CMS) that provides tools for document management, collaboration, portals, and social networks. Employing Mule as an ESB and utilizing the SharePoint Connector allows businesses to integrate SharePoint with other MS offerings, creating seamless connectivity throughout the enterprise.

Active directory

Active Directory stores, organizes, and provides access to information for Windows networks. It provides secure authentication and authorization of users and services, as well as location transparency, object data, rich query, and high availability.

Because Active Directory makes use of LDAP, businesses can utilize Mule ESB to enable LDAP authentication. Moreover, organizations can also configure security managers to allow authentication.

Mule as an ESB for .NET application connectivity

Mule as an ESB offers a platform to create seamless connectivity on-premises or in the cloud for .NET applications. Employ Mule as an ESB for .NET connectivity to create repeatable integrations to streamline business processes and make data available across the enterprise.

With an easy to use graphical design environment, Anypoint Studio has businesses up and running quickly. DataWeave, a graphical interface, works with Studio to simplify data mapping and transformations.

A large library of Anypoint Connectors and out-of-the-box support for enterprise integration patterns, transports, and protocols make creating connectivity with Mule ESB easy. Moreover, with a large community of developers, the learning curve is greatly reduced.

Mule Enterprise Services Bus provides businesses the tools to create .NET connectivity to allow for seamless integration throughout the enterprise. With Mule as an ESB, businesses can efficiently access information and create connectivity to disparate systems.

Today, we will see the facts that will shape the future of DevOps

The Shift of Focus From CI Pipelines to DevOps Assembly Lines

Pipelines show you a complete visualization of your app from source control to production. You can see everything in a single pane of glass.

It’s not just about doing CI now; it is about CD (continuous delivery). Organizations are investing their time and effort into understanding more about automating their complete software development process. In 2020, the shift is going to happen from CI pipelines to DevOps assembly lines.

Automation Will Become the Primary Focus

In DevOps, we talk a lot about automation. Zero-touch automation, where possible, is the future. That doesn’t mean you have to automate everything, but if you need to, you should be able to do it.

Understanding the 6 C’s of the DevOps cycle and making sure to apply automation between these stages is the key, and this is going to be the main goal in 2019.

Testers Are Expected to Learn to Code

Testers who know how to code and automate scripts to test various cases are in huge demand in DevOps. If you are a tester and in a dilemma over whether to learn coding or not, we recommend learning to code.

Understanding different DevOps tools and automating scripts plays a vital role in software development today, and this is going to dominate in 2019.

Testers are expected to perish if they don’t learn to code and write their own automated test scripts.

Manual testing consumes a lot of time and will become obsolete in 2019. Automation in testing not only increases efficiency but also ensures features are delivered to market faster.

Increase in the Adoption of Microservices Architecture

DevOps and microservices lately go hand in hand. Microservices are independent entities, so when something goes wrong they don’t create dependencies that break other systems.

Microservices architecture helps companies make deployments and add new features easily.

Companies are expected to move to a microservices architecture to improve uptime and deliver efficiently. Don’t just follow others because they adopted it; understand for yourself why you should adopt a microservices architecture.

More Companies Are Expected to Opt for Enterprise Versions

Many companies are still in a dilemma over whether to build or buy. We recommend doing what you are best at and buying tools that fit your requirements. This not only helps you focus on your goals but also increases productivity by relying on the third-party platform.

Many companies are now going for enterprise versions to get their own infrastructure and make sure security is in the best hands possible.

Kubernetes Is Going to Evolve Significantly

Kubernetes has become the fastest-growing container technology because of its capabilities and ease of use, and it has built a great open-source community around it. Around the world, many CIOs and technologists have adopted Kubernetes, and it is expected to keep evolving in 2019.

Recently, leading up to KubeCon + CloudNativeCon North America (December 6–8, 2017), the Cloud Native Computing Foundation conducted a survey and shared how the Container Orchestration Landscape is changing and moving towards Kubernetes.

Security Will Become the Primary Focus — DevSecOps

The CI/CD pipeline makes it possible to ship rapid changes daily to address customer needs and demands. The pipeline can be automated as well, and hence security has to be a design constraint these days.

Thinking about security right from the beginning requires security to be built into software instead of being bolted on; security is no longer an add-on.

Recently we have seen a rising trend of DevSecOps, which is about injecting security first in the life cycle of application development, thus lessening vulnerabilities and bringing security closer to IT and business objectives.

This model assumes everyone is responsible for security, so there is less noise and confusion about who did what and what went wrong.

While DevOps and security should go hand in hand, making sure your developers use the same dependencies, environments, and software packages throughout the software development process is a must, and hence it is crucial to have an artifact repository manager in place.

AI & ML Will Foster DevOps Growth

AI and ML are perfect fits for a DevOps culture. They can process vast amounts of information and help perform menial tasks, freeing the IT staff to do more targeted work.

They can learn patterns, anticipate problems and suggest solutions. If DevOps’ goal is to unify development and operations, AI and ML can smooth out some of the tensions that have divided the two disciplines in the past.

Tableau Desktop vs Tableau Public vs Tableau Reader vs Tableau Server

Tableau Public

This is essentially a free version of the Tableau visualization software. It allows you to use the majority of the software's functions. You can create visualizations and connect to CSV, text, and Excel documents.

However, the largest difference is that Tableau Public does not allow you to save your workbooks locally. You have to save them publicly, which means that everyone can see your data since it's saved to the cloud.

Tableau Desktop

Tableau Desktop offers the full features of the software. Your workbooks can be shared locally. This version allows you to connect to different file types, create extracts of the data sources, and save your Tableau workbooks locally and publicly.

The main limitation is in the data connection types: you will not be able to have a direct connection to a database or to web and software APIs.

Tableau Reader

Tableau Reader allows you to read Tableau file types. If you want to share your workbook by sending a file, the receiver will need Tableau Reader to open the document. Without the Reader, you may need to share it publicly or convert the workbook into PDF format.

Tableau Server

Tableau Server allows users to save workbooks securely across the organization using a secure server. This eliminates the need for a user to have to share the workbook publicly. However, this comes at additional cost.

Overview of MuleSoft load balancer architecture

Load balancer:

It is the process of distributing network traffic across multiple servers, which ensures no single server bears too much demand. Spreading the work evenly across load balancers increases application responsiveness. Load balancers also provide additional capabilities, including security features. In this article, let us look at the MuleSoft load balancer architecture.

 MuleSoft Cloudhub:

CloudHub is an integration platform as a service (iPaaS). Through it, you can deploy sophisticated cross-cloud integration applications, build new APIs on top of existing data sources, and integrate on-premises apps with cloud services.

MuleSoft CloudHub comes with two types of load balancers:

  • Shared load balancer
  • Dedicated load balancer

 Shared Mulesoft Load balancer architecture:

CloudHub offers a shared MuleSoft load balancer by design, usable in all environments. It provides basic features, including TCP load balancing. CloudHub shared load balancers do not require you to set up custom SSL certificates or proxy rules.

Besides, shared load balancers have lower rate limits, which help ensure the stability of the platform. MuleSoft tracks and scales those limits regularly, as appropriate. The rate limits are applied per region.

If you deploy an application to workers in several regions, the rate limit may be different for each region. The MuleSoft shared load balancer architecture is shown in the figure below.

Dedicated Mulesoft load balancer architecture:

CloudHub dedicated load balancers (DLBs) are an optional Anypoint Platform feature that allows you to redirect external HTTP and HTTPS traffic to multiple Mule applications running on CloudHub workers in a Virtual Private Cloud (VPC).

Dedicated load balancers allow you to:

  • Manage load balancing among the various CloudHub workers running your application.
  • Define SSL configurations to include unique certificates and optionally enforce two-way SSL client authentication.
  • Set up proxy rules that map your applications to custom domains, which allows you to host your applications within a single domain.

Mulesoft Load balancer architecture DLB:

Shared load balancer v/s dedicated load balancer:

  • With shared load balancers, the external IP sits directly on the load balancer. With a dedicated load balancer, the VIP is an internal IP; to allow external access, the firewall must be fitted with a MIP/NAT that connects to the internal VIP.
  • If a customer wants to use a load balancer to reach an internal VIP over a VPN via their own domain, a dedicated load balancer is required. Connecting to an internal VIP on the shared load balancer over a VPN is not feasible.
  • Shared load balancers run as a high-availability pair, so if any node goes down, no downtime occurs. Dedicated load balancers can be set up as an HA pair on request.
  • The dedicated load balancer service is limited in that Customer Care engineers must confirm the state of the infrastructure and virtual machine status, while customers are responsible for Day 1, Day 2, and Day N operational services.

 When should you use a dedicated Load balancer?

In the previous paragraphs you may have formed an idea of shared and dedicated load balancers. With a shared load balancer, when you deploy an application to CloudHub, a new Mule runtime worker is created, and the default MuleSoft load balancer routes requests for the API to the Mule worker(s). For example, the public URL would be http(s)://space-station.cloudhub.io, where space-station is the deployment name. The API will be open to the public and will be load balanced when assigned to several workers. Note that the URL is tied to the cloudhub.io domain.

With a shared load balancer, the API will be open to the public. If I want to use my own domain for my workers and organization, I should go for a dedicated load balancer. This helps in controlling and managing your API as per the organization's policies.

Configuring dedicated Mulesoft load balancer architecture:

  • Open Anypoint Platform and click Runtime Manager.
  • Click Load Balancers, and then click Create Load Balancer.
  • Give a name to your load balancer.
  • Select the target Anypoint VPC from the drop-down list.
  • In the Timeout in seconds field, specify the amount of time the DLB waits for a response.
  • Configure the domain routing as per your requirements.
  • Select the inbound HTTP behavior for the load balancer.
  • Add a certificate.
  • Click Create Load Balancer.

Conclusion:

In this article, I discussed the MuleSoft load balancer architecture. You can use a load balancer to manage traffic to your applications in the cloud.
