Are You a ServiceNow Admin? Check These Plugins

Plugins are software components that provide specific features and functionalities within a ServiceNow instance.

Below is a list of plugins that you can activate yourself if you have the admin role.

If a plugin does not appear on this list, it may require activation by ServiceNow personnel. Request these plugins through the HI Customer Service System instead of activating them yourself.

  • For steps on activating a plugin yourself, see the product documentation on activating plugins.
  • For steps on requesting a plugin that you cannot activate yourself, see the product documentation on requesting plugins.
Plugin list (Plugin – ID – Description – State – Has demo data – Dependencies):

  • Action Status Automation (com.sn_action_status): Tracks blocking records created for tasks and updates the action status indicators on the task list. State: Active. Demo data: true. Dependencies: none.
  • Activity formatter (com.glide.ui_activity_formatter): State: Active. Demo data: false. Dependencies: none.
  • Advanced Work Assignment (com.glide.awa): Automatically assigns work items to agents based on their availability, capacity, and skills. State: Active. Demo data: false. Dependencies: com.snc.skills_management.
  • Advanced Work Assignment for CSM (com.sn_csm.awa): Configuration data supporting routing, queuing, and assignment of CSM cases. State: Active. Demo data: false. Dependencies: com.sn_customerservice, com.glide.awa.
  • Advanced Work Assignment for Incidents (com.snc.incident.awa): Default configuration to support Advanced Work Assignment for Incident. State: Active. Demo data: false. Dependencies: com.glide.awa, com.snc.agent_workspace.itsm.
  • Agent Chat (com.glide.interaction.awa): Enables Workspace Agent Chat and the Chat service channel in Advanced Work Assignment. State: Active. Demo data: true. Dependencies: com.glide.interaction, com.glide.awa.
  • Agent Intelligence (changed in New York) (com.glide.platform_ml): Renamed to Predictive Intelligence. State: Active. Demo data: false. Dependencies: com.glide.platform_ml_pa.
  • Agent Intelligence Reports (changed in New York) (com.glide.platform_ml_pa): Renamed to Predictive Intelligence Reports. State: Active. Demo data: false. Dependencies: none.
  • Agent Schedule (com.snc.agent_schedule): Enables customer service agents and field service technicians to see work schedules and assignments, and to add personal events such as meetings or appointments. State: Active. Demo data: false. Dependencies: com.snc.app.agent_calendar_widget.
  • Agent Workspace (com.agent_workspace): All-in-one app enabling CSM/ITSM agents to provide world-class service at light speed. State: Active. Demo data: false. Dependencies: com.glide.uxbuilder, com.glide.graphql, com.glide.interaction, com.snc.agent_workspace.config, com.snc.agent_workspace.ribbon, com.snc.agent_workspace.list, com.snc.agent_workspace.form, com.snc.agent_workspace.global_search, com.snc.agent_workspace.declarative_actions.
  • Agent Workspace – List (com.snc.agent_workspace.list): Workspace list configurations. State: Active. Demo data: false. Dependencies: com.snc.agent_workspace.config.
  • Agent Workspace – Ribbon (com.snc.agent_workspace.ribbon): Workspace ribbon configurations. State: Active. Demo data: false. Dependencies: com.glide.uxbuilder, com.snc.agent_workspace.config, com.sn_resolutionshaper.
  • Aggregate Web Service (com.glide.web_service_aggregate): Provides SOAP access to GlideAggregate functionality. State: Inactive. Demo data: false. Dependencies: none.
  • Agile Development 2.0 (com.snc.sdlc.agile.2.0): Provides enhanced functionality on top of Agile Development. If you already have a customized version of Agile Development, delete the customizations before activating Agile Development 2.0 to ensure that all features work properly; refer to the documentation for detailed steps. State: Inactive. Demo data: true. Dependencies: com.snc.project_portfolio_suite.
  • Agile Development – Unified Backlog (com.snc.sdlc.agile.multi_task): Enables you to maintain a centralized backlog containing records of different task types, such as defects, problems, incident tasks, and stories, and to include any task in your agile workflow. State: Inactive. Demo data: false. Dependencies: com.snc.sdlc.agile.2.0.
  • Agile – Scaled Agile Framework – Essential SAFe (com.snc.sdlc.safe): Scaled Agile Framework was designed to apply Lean-Agile principles to the entire organization. Essential SAFe is the most basic configuration of the framework and provides the minimal elements necessary to be successful with SAFe: manage your agile release train backlog and plan program increments. State: Active. Demo data: true. Dependencies: com.snc.sdlc.ranking, com.snc.sdlc.agile.2.0.


Configuring Properties in Mule 4

You can configure properties, such as property placeholders and system properties.

Property Placeholders

You can use Ant-style property placeholders in your Mule configuration. For example:

<email:smtp-config name="config">
    <email:smtp-connection host="${smtp.host}" port="${smtp.port}"/>
</email:smtp-config>

The values for these placeholders can be made available in a variety of ways, as described in the sections that follow.

Global Properties

You can use the <global-property> element to set a placeholder value from within your Mule configuration, such as from within another Mule configuration file. You can also use the global property syntax to reference values from a .yaml or .properties file, and to create new global properties that depend on configuration properties or secure configuration properties. To reference configuration properties, read the section on properties files.

<global-property name="smtp.host" value="smtp.mail.com"/>
<global-property name="smtp.subject" value="Subject of Email"/>

Properties Files

To load properties from a custom file, you can place your custom properties file at src/main/resources and use the tag <configuration-properties>:

<?xml version="1.0" encoding="UTF-8"?>

<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd">

<configuration-properties file="smtp.yaml"/>

<flow name="myProject_flow1">
    <logger message="${propertyFromFile}" doc:name="System Property Set in Property File"/>
</flow>

</mule>

To load multiple properties files, simply define a <configuration-properties/> tag for each file you want to load, as shown below.

  • If a property is defined in more than one file referenced in <configuration-properties/> tags, the first definition will be preserved.
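For example, a minimal sketch that loads two files (the file names here are illustrative):

<configuration-properties file="smtp.yaml"/>
<configuration-properties file="http.yaml"/>

With this configuration, a property defined in both files resolves to the value in smtp.yaml, because its definition comes first.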

These files must be located at src/main/resources, inside your Mule project, although you can also use absolute paths.

Supported Files

Configuration properties support both YAML and Properties files. The recommended approach is to use a YAML configuration file, because it allows the addition of type validation and autocompletion.

String is the only supported type.

YAML file example:

<configuration-properties file="ports.yaml"/>

Where ports.yaml is:

smtp:
    port: "8957"
http:
    port: "8081"

Properties file example:

<configuration-properties file="ports.properties"/>

Where ports.properties is:

smtp.port=8957
http.port=8081

File Properties

The placeholder value can also be the entire content of a file; the file's content becomes the property's string value. For example, given a file properties-file.txt containing:

Some content

you can reference it like this:

<mule:set-payload value="${file::properties-file.txt}"/>

The payload’s value becomes "Some content". Just like other properties files, these files must be located in src/main/resources, inside your Mule project. Absolute paths can also be used.

This practice is useful for modularizing the configuration file: you can extract large content, such as SQL queries or transformations, to make the config file clearer, and you can reuse the extracted content.
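As a sketch of this pattern, assuming the Database connector is in the app and a query file named bigQuery.sql sits in src/main/resources (the config name is illustrative), the query text can be pulled in with the file:: prefix:

<db:select config-ref="Database_Config">
    <db:sql>${file::bigQuery.sql}</db:sql>
</db:select>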

System Properties

You can set JDK system properties when running Mule On-Premises, and later use these properties to configure components or modules in a Mule application. There are two ways to specify properties (a usage sketch follows the list):

  • From the command line, when starting the Mule instance:
mule -M-Dsmtp.username=JSmith -M-Dsmtp.password=ChangeMe
  • Editing the wrapper.conf file located in the $MULE_HOME/conf directory, adding entries for each property:
wrapper.java.additional.999=-Dsmtp.username=JSmith
wrapper.java.additional.1000=-Dsmtp.password=ChangeMe
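Either way, the properties resolve like any other placeholder. For example, reusing the SMTP configuration from above (the user and password attributes are assumed here for illustration):

<email:smtp-connection host="${smtp.host}" port="${smtp.port}"
                       user="${smtp.username}" password="${smtp.password}"/>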

Custom Properties Provider

It is possible to create a custom properties provider implementation using the Mule API.

Setting System Properties in Anypoint Studio

You can add properties when you launch your project on Anypoint Studio, through the Run Configurations menu:

  1. Right-click your project in Package Explorer.
  2. Click Run As > Run Configurations.
  3. Pick the Arguments tab.
  4. Add your arguments to the VM arguments field, preceding property names with -D (for example, -DpropertyFromJVMArg=someValue).

Your properties are now available each time you deploy your app through Studio. You can then reference them with the following syntax:


<logger message="${propertyFromJVMArg}" doc:name="System Property Set in Studio through JVM args"/>

Environment Variables

Environment variables can be defined in various ways, and there are also several ways to access them from your apps. Regardless of how an environment variable is defined, the recommended way to reference it is through the following syntax:

${variableName}

Environment Variables From the OS

To reference a variable that is defined in the OS, you can simply use the following syntax:

<logger message="${USER}" doc:name="Environment Property Set in OS" />

Setting Environment Variables in Anypoint Studio

You can set variables in Studio through the Run Configuration menu:

  1. Right-click your project in Package Explorer.
  2. Select Run As > Run Configurations.
  3. Pick the Environment tab.
  4. Click the New button and assign your variable a name and value.

Your variable is now available each time you deploy through Studio. You can reference it with the following syntax:

<logger message="${TEST_ENV_VAR}" doc:name="Environment Property Set in Studio"/>

Setting the Properties File Dynamically

A common configuration use case is to make the properties file depend on a property (for example, env) that determines which file to use, such as a development properties file during the development stage and a production file in production.

<configuration-properties file="${env}-properties.yaml"/>

This way, the value of the property env determines which file to use to load the configuration properties. That env property can be set by a global property, system property, or environment property.

You can use global properties as a way to define default values for configuration properties. System and environment properties with the same name as a global property will override that global property.

<global-property name="env" value="dev"/>

<configuration-properties file="${env}-properties.yaml"/>

This way, the default value for the env property is dev, which can still be overridden with a system or environment property. Please note that this configuration is required for metadata resolution in Anypoint Studio.

If you do not define default values for the properties that are passed through the command line, you receive an error while creating an application model for all message processors that depend on them.

Another thing to consider is that placeholders of a configuration property setting cannot depend on the properties loaded from another configuration property.

In the example above, the property env couldn’t have been defined in a configuration property. The example below is not correct:

<configuration-properties file="file-with-env-property.yaml"/>
<configuration-properties file="${env}-properties.yaml"/>




This also includes other type of properties, such as Secure Configuration Properties or Custom Configuration Properties.

Setting Properties Values in Runtime Manager

If you deploy your application to Runtime Manager, you can also set properties through the Runtime Manager console. These can be defined when Deploying to CloudHub, or on an already running application.

To create an environment variable or application property:

  1. Log in to your Anypoint Platform account.
  2. Click Runtime Manager.
  3. Either click Deploy Application to deploy a new application, or select a running application and click Manage Application.
  4. Select the Properties tab in the Settings section.

See Managing Applications on CloudHub and Secure Application Properties for more details.

Properties Hierarchy

Configuration properties can be overwritten. The Mule runtime engine uses the following hierarchy to determine which properties take precedence when they have the same name. In this hierarchy, deployment properties are at the top, so they take precedence over the other types of properties.

  1. Deployment properties
  2. System properties
  3. Environment properties
  4. Application properties (includes configuration properties, secure configuration properties, and other custom configuration properties)
  5. Global Properties

So, for example, if a configuration property named size is defined as a system property, and there is also an application configuration property named size, the value for the application is the value of the property with the most precedence (in this case, the system property).

Also, a property can derive its value from other properties that are higher in the hierarchy. Therefore, an application property can derive its value from environment, system, and/or deployment properties. A system property can derive its value from a deployment property, and so on.

For example, if there is a system property named env, an application configuration property can have the value file.${env}.xml. On the other hand, an application property cannot depend on an application property’s value unless it’s defined in the same file.

For example, a Secure Configuration Property cannot depend on a Configuration Property.


Validating Documents against an XSD Schema with the XML Module

XML Module:

The XML Module can process and extract data from an XML document. Although DataWeave is recommended for most XML-related use cases, the XML module should be used for cases that involve the use of XML standards such as XSLT, XPath and XQuery, or XSD.

To use the XML module, you simply add it to your Mule app through the Studio or Flow Designer UI, or you can add the following dependency in your pom.xml file:

<dependency>
    <groupId>org.mule.modules</groupId>
    <artifactId>mule-xml-module</artifactId>
    <version>1.1.0</version> <!-- or newer -->
    <classifier>mule-plugin</classifier>
</dependency>

Validate Against a Schema

When using the XML Module to validate against a schema that references other local schema files, validation can fail because access is restricted by the expandEntities parameter, which defaults to NEVER. The error message is: The supplied schemas were not valid. schema_reference: Failed to read schema document NMVS_Composite_Types.xsd, because file access is not allowed due to restriction set by the accessExternalSchema property.

You can eliminate this issue by adding expandEntities="INTERNAL" to the xml-module:config element.
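For example, a minimal sketch of such a configuration (the config and flow names are illustrative):

<xml-module:config name="xmlConfig" expandEntities="INTERNAL"/>

<flow name="validateWithLocalIncludes">
    <xml-module:validate-schema config-ref="xmlConfig" schemas="NMVS_Composite_Types.xsd"/>
</flow>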

Validating Documents against an XSD Schema with the XML Module

The <xml-module:validate-schema> operation validates that the input content is compliant with a given schema. This operation supports referencing many schemas (using comma as a separator) that include each other.

This example works with XML that contains the script of the play Othello by William Shakespeare. The following flow validates that document against the referenced schemas before processing it:

<flow name="process">
    <xml-module:validate-schema schemas="schema1.xsd, schema2.xsd" />
    <flow-ref name="processValidDocument" />
</flow>

By default, this operation looks for the input document at the message payload level. However, you can supply your own input, for example:

<flow name="process">
    <file:read path="document.xml" target="xmlDoc" />
    <xml-module:validate-schema schemas="schema1.xsd, schema2.xsd">
        <xml-module:content>#[vars.xmlDoc]</xml-module:content>
    </xml-module:validate-schema>
    <flow-ref name="processValidDocument" />
</flow>


Note that you can use the <xml-module:validate-schema> component inside a <validation:all> element.
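For example, a sketch combining it with another validator, assuming the Validation module is also in the app:

<validation:all>
    <xml-module:validate-schema schemas="schema1.xsd"/>
    <validation:is-not-null value="#[payload]"/>
</validation:all>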

Handling the Validation Error

If the validation is successful, nothing happens, and the flow continues to the next operation. If it fails, an XML-MODULE:SCHEMA_NOT_HONOURED error is raised.

The XML-MODULE:SCHEMA_NOT_HONOURED error is a complex one. Because the validation can fail for any number of reasons, the error contains a list of Messages. Each message contains a SchemaViolation object, which has the following structure:

SchemaViolation Object

{
    lineNumber: Number,
    columnNumber: Number,
    description: String
}

Consider the following example:

<flow name="extractErrorsFromException">
    <try>
        <xml-module:validate-schema schemas="schema.xsd" /> (1)
        <error-handler>
            <on-error-propagate type="XML-MODULE:SCHEMA_NOT_HONOURED"> (2)
                <foreach collection="#[error.errorMessage.payload]">
                    <logger level="ERROR" message="#['At line: $(payload.lineNumber), column: $(payload.columnNumber) -> $(payload.description)']" /> (3)
                </foreach>
            </on-error-propagate>
        </error-handler>
    </try>
</flow>

When validation fails, the logger produces output like this (truncated):

ERROR 2018-02-16 14:35:45,722 [[MuleRuntime].cpuIntensive.01: [SchemaValidationTestCase#extractErrorsUs


Create a parsing rule and Parsing strategies

In discovery patterns, you can use parsing strategies to analyze the syntax of a source file and extract values from it, which you can later convert into variables.

Several parsing strategies come with the base system:

  • Oracle LDAP file, XML file, INI file, Properties file, JSON file (custom): Horizontal file parsing strategy (not vertical). You can use this parsing strategy only for text files.
  • Vertical File: Retrieve text from a structured text file where each set of data spans multiple lines.
  • After Keyword: Retrieve text directly following a specific keyword.
  • Command Line Java Style: Retrieve the value of a command-line parameter using Java-style parameters.
  • Command Line Unix Style: Retrieve the value of a command-line parameter using standard Unix parameters.
  • Position From End: Retrieve text specified by its position from the end of the line.
  • Position From Start: Retrieve text specified by its position from the beginning of the line.
  • Regular Expression: Retrieve text specified by a regular expression.
  • Delimited Text: Retrieve text specified by delimiters and position within the line (the most common way to retrieve text from generic text files).

In addition to these parsing strategies, you can create custom parsing strategies to meet the needs of your organization.

Create a parsing rule:

Populate output variables defined in a custom activity with payload data returned from an inputs test on an external host or endpoint.

Before you begin

Roles required: activity_admin, activity_creator

Procedure

  1. Navigate to Workflow > Workflow Editor.
  2. From the Custom tab in the palette, open a custom activity.
  3. In the Activity Designer form, advance to the Output stage.
  4. Drag an output variable from the data structure builder into the Variable name field in the Parsing rules builder. The parsing rules form appears for the selected variable. By default, the parsing type is set to Direct, which populates the variable with all the data from the selected payload, without parsing the contents. Each template has a specific default parsing source.
  5. Complete the form using the fields in the table below.

In this example, the parsing type selected is XML, which allows you to select specific parameters from the payload to parse.

Parsing rules fields
  • Parsing source: Source of the data returned from the target host or endpoint. Each template opens to a specific, default payload. Available choices depend on the execution template selected for the activity. You can also use local variables as a parsing source if a parsing rule has previously been defined for them.
  • Expression: Expression used to extract specific data from the selected parsing source. This expression is created from clickable data in the sample payload and appears in the format selected in the Parsing type field. When testing, the expression can return multiple results. Discern which choice gives reliable or predictable results before choosing your expression. Note: The system cannot generate clickable RegEx expressions from sample data. You must write all regular expressions manually.
  • Variable name: Revised variable name as it is used in the final output expression. The system adds the activityOutput or activityLocal prefix to the variable you specify.
  • Parsing type: The language to use for querying the target host's payload. The selections are:
    – Direct: Maps to the entire content of the payload selected in the Parsing source field, without any parsing. This is the default parsing type.
    – XML: XPath query used for selecting nodes from an XML payload.
    – JSON: JSONPath query for selecting parts of a JSON payload.
    – RegEx: Parsing method that uses a regular expression to extract data from a payload. The RegEx parsing type does not support multi-line parsing and is not case sensitive.
  • Short description: Brief description of this parsing rule.
  • Sample payload data: Sample data from the source containing the data requested. This field is not available for Direct parsing types. After you click Parse sample data, the data in this field cannot be edited, but becomes clickable for the purpose of creating expressions. Click Edit sample data to make the field editable again.
  • Parsing results: Displays the data returned from the source by the selected expression. This field is not available for Direct parsing types.
  6. To retest the inputs, click Get sample payload from test.

This action reopens the test form, allowing you to substitute different test values and create a different payload.

  • Click Save to have the parsing rules overwrite the previous payload with the one you just created.
  • To create an expression for the parsing rule, click the specific parameter you want to see in the sample payload.

The value for that parameter appears in the Parsing result field, and the system creates the appropriate expression in the Expression field.

  • Click Submit to save the parsing rule for that variable.

Activity designer parsing sources

This table lists the parsing sources available with each execution template.

Parsing sources

  • SOAP Web Service: executionResult.body (default), executionResult.status_code, executionResult.header, executionResult.error
  • JDBC: executionResult.output (default), executionResult.errorMessages, executionResult.probeCompletedEccId, executionResult.totalRows
  • JavaScript Probe: executionResult.payload (default), executionResult.output, executionResult.eccSysId, executionResult.errorMessages
  • Powershell: executionResult.output (default), executionResult.tags, executionResult.hresult, executionResult.eccSysId, executionResult.errorMessages
  • REST Web Service: executionResult.body (default), executionResult.status_code, executionResult.header, executionResult.error
  • SFTP: executionResult.output (default), executionResult.eccSysId, executionResult.errorMessages, executionResult.tags
  • Probe: executionResult.output (default), executionResult.payload, executionResult.eccSysId
  • SSH: executionResult.output (default), executionResult.eccSysId, executionResult.errorMessages, executionResult.tags
  • JMS: executionResult.status, executionResult.standardHeaders, executionResult.customHeaders, executionResult.messagePayload, executionResult.eccSysId, executionResult.errorMessages

Activity designer parsing rule example

In this example, the parsing rule is configured to populate the activityOutput.ipv4 variable with the value for the IP address from a domain server, using PowerShell.


Before you begin

Role required: activity_creator, activity_admin

About this task

To generate the sample data, the administrator must actually run the command on the host and then paste the data returned into the Sample payload data field when creating the parsing rule. The administrator can then create an expression that returns IP addresses from that sample in two formats: ipv4 and ipv6. In this example, the system produces two expressions to use for the parsing rule.
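To make this concrete, here is a hypothetical fragment of such a sample payload (not actual product output), followed by the kind of XPath expressions that clicking the values could produce:

<results>
    <IPAddress type="ipv4">10.1.1.25</IPAddress>
    <IPAddress type="ipv6">fe80::d1c3:a2b1</IPAddress>
</results>

Clicking the first value would yield an expression such as //IPAddress[@type='ipv4']/text() for the activityOutput.ipv4 variable, and clicking the second would yield the equivalent ipv6 expression.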

Procedure

  1. Navigate to Workflow > Workflow Editor and open the activity that runs on the host.
  2. Click the Inputs tab, and note the Command.
  3. In a PowerShell console, run the Command on the host to extract the XML sample that contains the values you need.
  4. Copy the data that is returned to the clipboard.
  5. In the activity designer, click the Outputs tab and paste the returned data into the Sample payload data field.

In this example, the data includes IP addresses in two different formats and the domain name.

  6. Select the parsing type for the source. In this example, select XML.
  7. Click Parse sample data. The system displays the XML in the proper format, and it becomes clickable. In this view, the system can translate clicked data from the sample into an expression.
  8. To create the expression, click the elements in the data sample you want to map to the variable. Based on the sample data you clicked, the system creates two expressions.
  9. Select an expression from the list. The desired result is the IP address that has a type attribute of ipv4. The system populates the Expression field with this choice.
  10. Click Test expression. The system parses the payload using the selected expression and returns the requested data in the Parsing result field.
  11. Click Submit.

The view returns to the Outputs tab of the activity designer. The new parsing rule is listed, and a blank row is available for another rule.

Tableau order of operations

Tableau's order of operations governs how the data in a view is calculated. Tableau offers many ways to arrange and visualize data, and these operations are applied according to your requirements.

The sequence of operations in Tableau, sometimes called the query pipeline, is the order in which Tableau performs different activities, also known as operations.

Many operations apply filters, which means that as you build a view and add filters, those filters always execute in the order established by the order of operations.

About the Orders of Operations 

Sometimes you might expect Tableau to apply filters in one order, but the order of operations dictates that they are executed in a different order, which can give you unexpected results.

When this occurs, you can sometimes change the order in which operations are executed in the pipeline.

The Tableau order of operations is described below.

Note: In the order of operations, the Latest Date filter is global to the workbook, while context filters apply per worksheet. The latest date is determined just after the workbook opens for first use, after data source filters but before context filters. At that point the date is set, and the latest date preset is used as a dimension filter.

Example 1: Convert a Dimension Filter to a Context Filter

This and the following example use the Sample – Superstore data source that ships with Tableau Desktop.

In this example, the view addresses the following question: Who are the top 10 customers, by total sales, in New York City?

The view contains two dimension filters, one that you create on the General tab of the Filter dialog box, and the other on the Top N tab. The issue is that these filters execute in a fixed sequence.

You might want the general filter to be applied before the Top N filter, so that the Top N filter can operate on the results already narrowed down by the general filter.


The solution is to redefine one of the filters as a context filter, so that a clear order of precedence is established.

Here are the steps for building this view.

Drag Sales to Columns.

Drag City and [Customer Name] to Rows.

Drag City from the Data pane once more, this time to Filters. On the General tab in the Filter dialog box, set the filter to show only one value: New York City.

Do this by clicking None and then choosing New York City.

This creates a general dimension filter.

Click the Sort Descending button on the toolbar.

Note the first few names in the list: Ashbrook, Fuller, Vernon, and many others.

Now drag [Customer Name] from the Data pane to Filters, and create a Top 10 filter to show just the top 10 customers by total sales.

After you apply this second filter, the view looks right, but notice that the names shown are no longer the same.

Example 2: Convert a Table Calculation to a FIXED Level of Detail Expression

In this example, the view addresses the following question: What is the percent of total sales by product sub-category?

The view contains a dimension filter and a table calculation. Tableau applies the dimension filter before executing the table calculation.

To change the order of these operations, use a FIXED level of detail expression instead of a table calculation.

Here is the process for building this view.

In a new worksheet, drag Sales to Columns.

Drag Sub-Category to Rows.

Right-click SUM(Sales) on Columns and select Quick Table Calculation > Percent of Total.

Click the Sort Descending button on the toolbar to sort the sub-categories from most to least.

Click the Show Mark Labels button on the toolbar to show measure values in the view.

This is all about the order of operations in Tableau; I will try to post more about this topic in the coming days. Every Tableau developer should be aware of it, as Tableau operations may grow into a separate area of study, like Tableau Desktop, Prep, and Public.

Best Big Data Analytics Tools

Open-source big data analytics refers to the use of open-source software and tools for analyzing huge quantities of data in order to gather relevant and actionable information that an organization can use in order to further its business goals.

The biggest player in open-source big data analytics is Apache’s Hadoop – it is the most widely used software library for processing enormous data sets across a cluster of computers using a distributed process for parallelism.

Open-source big data analytics makes use of open-source software and tools in order to execute big data analytics by either using an entire software platform or various open-source tools for different tasks in the process of data analytics.

Apache Hadoop is the most well-known system for big data analytics, but other components are required before a real analytics system can be put together.

Best Big Data Analytics Tools

Xplenty:

Xplenty is a data integration platform that requires no coding or deployment. Its big data processing cloud service brings immediate results to the entire organization: from designing dataflows to scheduling jobs, Xplenty can process both structured and unstructured data and integrates with a variety of sources, including SQL data stores, NoSQL databases, and cloud storage services.

It can read and process data from relational databases such as Oracle, Microsoft SQL Server, Amazon RDS, and PostgreSQL; NoSQL data stores such as MongoDB; cloud storage file sources such as Amazon S3; and many more.

Xplenty also allows you to connect with online analytical data stores such as AWS Redshift, and Google BigQuery.

Microsoft HDInsight:

Easily run popular open source frameworks—including Apache Hadoop, Spark and Kafka—using Azure HDInsight, a cost-effective, enterprise-grade service for open source analytics. Effortlessly process massive amounts of data and get all the benefits of the broad open source ecosystem with the global scale of Azure.

  • Quickly spin up big data clusters on demand, scale them up or down based on your usage needs and pay only for what you use.
  • Meet industry and government compliance standards and protect your enterprise data assets using an Azure Virtual Network, encryption and integration with Azure Active Directory.
  • Use HDInsight tools to easily get started in your favorite development environment.
  • HDInsight integrates seamlessly with other Azure services, including Data Factory and Data Lake Storage, for building comprehensive analytics pipelines.


Talend:

Talend is an open-source software platform that offers data integration and data management solutions. Talend specializes in big data integration. The tool provides features such as cloud, big data, enterprise application integration, data quality, and master data management. It also provides a unified repository to store and reuse metadata.

It is available in both open-source and premium versions and is one of the best tools for cloud computing and big data integration.

Splice Machine:

Splice Machine, the only Hadoop RDBMS, is designed to scale real-time applications using commodity hardware without application rewrites. The Splice Machine database is a modern, scale-out alternative to traditional RDBMSs, such as Oracle, MySQL, IBM DB2 and Microsoft SQL Server, that can deliver over a 10x improvement in price/performance.

As a full-featured SQL-on-Hadoop RDBMS with ACID transactions, the Splice Machine database helps customers power real-time applications and operational analytics, especially as they approach big data scale.

Plotly:

Plotly’s team maintains the fastest growing open-source visualization libraries for R, Python, and JavaScript.

These libraries seamlessly interface with Plotly's enterprise-ready deployment servers for easy collaboration, code-free editing, and deployment of production-ready dashboards and apps.


Report layouts and queries using Cognos

Report layouts and queries are two key components in every report you run with Cognos Report Studio. The layout defines the report appearance and the query defines the report data.

A layout in Cognos Report Studio is a set of pages that define the appearance and format of a report. When you design the report layout, you can complete the following tasks:

  • Present the data in a meaningful way, for example, by using lists, crosstabs, charts, or maps.
  • Format the report with borders, colors, images, or page numbers.
  • Specify how the data flows from one page to the next.

Pages are containers for the layout objects that you use to build a report. A page is made up of a page header, page body, and page footer.

You can add layout objects to a page when you create a report. You can use one of the following objects when you build a report in Cognos Report Studio:

  • List – shows data in rows and columns.
  • Crosstab – displays data in a grid with dimensions along the rows and columns, and measures in the cells or intersection points.
  • Chart – displays data in a column, bar, or area chart.
  • Map – displays data sets in a collection of layers. Maps often show the intersection of data.
  • Repeater – shows each instance of a certain column or data item in a separate frame.
  • Text – adds textual information.
  • Block – contains text or other information. Blocks are often used to lay out horizontal bands of information.
  • Table – displays data and other information in a table.

Queries determine what data items appear in the report. Cognos Report Studio automatically creates the queries that you need as you build reports. You can create and modify queries by using the Query Explorer.


Connect Tableau to a local JSON file

Make the connection and set up the data source

  1. Start Tableau and under Connect, select JSON File.
  2. Select the file you want to connect to, and then select Open.
  3. In the Select Schema Levels dialog box, select the schema levels you want to view and analyze in Tableau, and then select OK.
  4. On the data source page, do the following:
  5. (Optional) Select the default data source name at the top of the page, and then enter a unique data source name for use in Tableau. For example, use a data source naming convention that helps other users of the data source figure out which data source to connect to.
  6. Select the sheet tab to start your analysis.

JSON file data source example

Here is an example of a JSON file as a data source using Tableau Desktop on a Windows computer.

Select schema levels

When you connect Tableau to a JSON file, Tableau scans the data in the first 10,000 rows of the JSON file and infers the schema from that process. Tableau flattens the data using this inferred schema. The JSON file schema levels are listed in the Select Schema Levels dialog box.

The schema levels that you select in the dialog box determine which dimensions and measures are available for you to view and analyze in Tableau. If you select a child schema level, the parent level is also selected.
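For instance, in a hypothetical document like the following, Business and Business.Visits would appear as schema levels, and selecting Visits would automatically select its parent, Business:

{
    "Business": {
        "Name": "Example Business",
        "Visits": [
            { "Day": "Monday", "Count": 12 },
            { "Day": "Tuesday", "Count": 9 }
        ]
    }
}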

Detect new fields

Sometimes, more fields exist in rows that were not scanned to create the inferred schema. If you notice that a field you need is missing under Schema, you can choose to do one of the following:

  • Scan the entire JSON document. It may take a long time for the scan to complete.
  • Select schema levels from the schema listed and then select OK. Tableau reads your entire document and if more fields are found, they are listed in the Select Schema Levels dialog box.

Whenever Tableau detects that new fields are available, for example, during an extract refresh or when Tableau creates an extract after you’ve selected the schema levels, either an information icon near the file name or a notification on the Select Schema Levels dialog box will indicate that additional fields have been found.

Change schema levels

You can change the schema levels you selected by going to the data source page and selecting Data > [JSON file name] > Select Schema Level. Or, hover over the file name on the canvas and select the drop-down menu > Select Schema Level.

Union JSON files

You can union JSON data. To union a JSON file, it must have a .json, .txt, or .log extension.

How dimension folders are organized for hierarchical JSON files

After you select the sheet tab, the selected schema levels of your JSON file show under Dimensions on the Data pane. Each folder corresponds with the schema level you selected, and the attributes associated with that schema level are listed as children of the folder.

For example, in the following image, Stars is a dimension under the schema level Stars rated folder, and Day is a dimension under the schema level Visits folder. Category is also a schema level, but because it is a list of values and not a hierarchy of data, it doesn’t require its own folder, but is instead grouped under a parent folder, Example Business. Note that schema levels in the Select Schema Levels dialog box do not map directly to the folder structure in the Data pane. Folders in the Data pane are grouped by object so that you can easily navigate to fields and still have context for where the fields come from.

Tips for working with JSON data

These tips can help you work with your JSON data in Tableau.

  • Do not exceed the 10×10 limit for nested arrays.

A high number of nested arrays creates a lot of rows. For example, 10×10 nested arrays result in 10 billion rows. When the number of rows Tableau can load into memory is exceeded, an error displays. In this case, use the Select Schema Levels dialog box to reduce the number of selected schema levels.

To get in-depth knowledge, enroll for live free demo on Tableau Training

  • A data source that contains more than 100 levels of JSON objects can take a long time to load.

A high number of levels creates a lot of columns, which can take a long time to process. As an example, 100 levels can take more than two minutes to load the data. As a best practice, reduce the number of schema levels to just the levels that you need for your analysis.

  • A single JSON object cannot exceed 128 MB.

When a single object in the top-level array exceeds 128 MB, you must convert it to a file where the JSON objects are defined one per line.

  • The pivot option is not supported.

What’s New in this Connector Mule 4

Marketo Connector

The Anypoint Connector for Marketo is a closed-source connector that provides a connection between Mule and the Marketo REST API. This connector implements all supported v1.0 Marketo API endpoints and provides Mule 4 DataWeave functionality.

Prerequisites

This document assumes that you are familiar with Marketo, Mule, Anypoint Connectors, Anypoint Studio, Mule concepts, elements in a Mule flow, and Global Elements.

You need login credentials to test your connection to your target resource.

For hardware and software requirements and compatibility information, see the Connector Release Notes.

To use this connector with Maven, view the pom.xml dependency information in the Dependency Snippets in Anypoint Exchange.

What’s New in this Connector

First Mule 4 version.

To Install Connector in Studio

  1. In Anypoint Studio, click the Exchange icon in the Studio taskbar.
  2. Click Login in Anypoint Exchange.
  3. Search for this connector and click Install.
  4. Follow the prompts to install this connector.

When Studio has an update, a message displays in the lower right corner, which you can click to install the update.

To Configure in Studio

  1. Drag and drop the connector to the Studio canvas.
  2. Click the green plus button to configure the connector.
  3. A window appears so that you can configure the required fields.

Create a Program in Studio 7

XML

<?xml version="1.0" encoding="UTF-8"?>

<mule xmlns:os="http://www.mulesoft.org/schema/mule/os"
xmlns:ee="http://www.mulesoft.org/schema/mule/ee/core"
xmlns:marketo-rest-api="http://www.mulesoft.org/schema/mule/marketo-rest-api"
xmlns:http="http://www.mulesoft.org/schema/mule/http"
xmlns="http://www.mulesoft.org/schema/mule/core"
xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core
http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http
http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
http://www.mulesoft.org/schema/mule/marketo-rest-api
http://www.mulesoft.org/schema/mule/marketo-rest-api/current/mule-marketo-rest-api.xsd
http://www.mulesoft.org/schema/mule/ee/core
http://www.mulesoft.org/schema/mule/ee/core/current/mule-ee.xsd
http://www.mulesoft.org/schema/mule/os
http://www.mulesoft.org/schema/mule/os/current/mule-os.xsd">
	<configuration-properties file="mule-app.properties"
	doc:name="Configuration properties"/>
	<http:listener-config name="HTTP_Listener_config"
	doc:name="HTTP Listener config">
		<http:listener-connection host="localhost" port="8081" />
	</http:listener-config>
	<marketo-rest-api:config name="Marketo_Rest_API_Config" doc:name="Marketo Rest API Config" property_basePath="/"
	property_clientId="${clientId}"
	property_clientSecret="${clientSecret}"
	property_host="${host}"
	property_accessTokenUrl="${accessTokenUrl}"
	property_port="${port}"
	property_protocol="${protocol}"/>
	<os:object-store name="Object_store" doc:name="Object store"  config-ref="ObjectStore_Config"/>
	<os:config name="ObjectStore_Config" doc:name="ObjectStore Config"  />
	<flow name="Create_Form" >
		<http:listener doc:name="HTTP"  config-ref="HTTP_Listener_config" path="/createForm" />
		<ee:transform doc:name="Transform Message">
			<ee:message >
				<ee:set-payload ><![CDATA[%dw 2.0
output application/json
---
{
	"description": "FormDemo",
	"folder":"22498",
	"name": "MarketoDemoForm_01"
}]]></ee:set-payload>
			</ee:message>
		</ee:transform>
		<marketo-rest-api:create-form doc:name="Create form" config-ref="Marketo_Rest_API_Config"/>
		<ee:transform doc:name="Object to JSON">
			<ee:message >
				<ee:set-payload ><![CDATA[%dw 2.0
output application/json
---
payload]]></ee:set-payload>
			</ee:message>
		</ee:transform>
		<os:store doc:name="Store form id" key="formId" objectStore="Object_store">
			<os:value ><![CDATA[#[payload.result[0].id]]]></os:value>
		</os:store>
		<set-variable
		value="#[payload.result[0].id]"
		doc:name="Set Variable"
		variableName="id"/>
		<set-variable
		value="#[payload.result[0].name]"
		doc:name="Set Variable"
		variableName="name" />
		<logger level="INFO" doc:name="Logger"
		message="Created form named: #[vars.name] with id: #[vars.id]" />
	</flow>
</mule>

To Connect in Design Center

  1. In Design Center, select a trigger such as the HTTP Listener or Scheduler.
  2. Select the plus sign to add a component.
  3. Select the connector as a component.
  4. Configure the required fields.

Advanced Hibernate Interview Questions and Answers

What is Hibernate Framework?

Object-relational mapping (ORM) is a programming technique to map application domain model objects to relational database tables. Hibernate is a Java-based ORM tool that provides a framework for mapping application domain objects to relational database tables and vice versa.

Hibernate provides a reference implementation of the Java Persistence API, which makes it a great choice as an ORM tool, with the benefit of loose coupling. We can use the Hibernate persistence API for CRUD operations.

The Hibernate framework provides the option to map plain old Java objects to traditional database tables by using JPA annotations as well as XML-based configuration (a sketch of the annotation approach follows).

Similarly, Hibernate configuration is flexible and can be done from an XML configuration file as well as programmatically.
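As a minimal sketch of the annotation approach (the entity and column names are illustrative):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "EMPLOYEE")
public class Employee {

    @Id
    @GeneratedValue // Hibernate picks an identifier generation strategy
    private Long id;

    private String name; // maps to a NAME column by default

    // getters and setters omitted for brevity
}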

What is ORM?

ORM is an acronym for Object/Relational Mapping. It is a programming strategy to map objects to the data stored in the database. It simplifies data creation, data manipulation, and data access.

What’s the usage of Configuration Interface in hibernate?

The Configuration interface of the Hibernate framework is used to configure and also to bootstrap Hibernate. Hibernate's mapping documents are located using this interface.

Explain the advantages of Hibernate?

Some of the advantages of Hibernate are:

  • It provides simple querying of data.
  • It does not require an application server to operate.
  • The complex associations of objects in the database can be manipulated.
  • Database access is minimized with smart fetching strategies.
  • It manages the mapping of Java classes to database tables without writing any code.
  • If the database changes, only the properties of the XML mapping file need to change.

What are the advantages of using Hibernate over JDBC?

Major advantages of using Hibernate over JDBC are:

  1. Hibernate eliminates a lot of the boilerplate code that comes with the JDBC API, so the code looks cleaner and more readable.
  2. This Java framework supports inheritance, associations, and collections. These features are not present in JDBC.
  3. HQL (Hibernate Query Language) is more object-oriented and closer to Java. For JDBC, you need to write native SQL queries.
  4. Hibernate implicitly provides transaction management, whereas in the JDBC API you need to write code for transaction management using commit and rollback.
  5. JDBC throws SQLException, which is a checked exception, so you have to write a lot of try-catch block code. Hibernate wraps JDBC exceptions and throws JDBCException or HibernateException, which are unchecked exceptions, so you don't have to write code to handle them.

Difference between get() vs load() method in Hibernate?

This is one of the most frequently asked Hibernate interview questions; I have seen it several times. The key difference between the get() and load() methods is that load() throws an exception if an object with the given id is not found, but get() returns null.

Another important difference is that load() can return a proxy without hitting the database unless required (that is, until you access any attribute other than the id), but get() always goes to the database. So sometimes using load() can be faster than the get() method.

Use the load() method if you know the object exists, and the get() method if you are not sure about the object's existence.
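A short sketch of the difference, reusing the illustrative Employee entity from above:

// get() hits the database immediately and returns null if no row exists
Employee e1 = session.get(Employee.class, 1L);

// load() can return a proxy without a database hit; if the row is missing,
// an ObjectNotFoundException is thrown when a non-id attribute is accessed
Employee e2 = session.load(Employee.class, 2L);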

Name some of the databases that Hibernate supports.

Hibernate supports almost all major RDBMSs. Following is a list of a few of the database engines supported by Hibernate.

  • HSQL Database Engine
  • DB2/NT
  • MySQL
  • PostgreSQL
  • FrontBase
  • Oracle
  • Microsoft SQL Server Database
  • Sybase SQL Server
  • Informix Dynamic Server

How to create database applications in Java with the use of Hibernate?

Hibernate makes the creation of database applications in Java simple. The steps involved are:

1) First, write the Java object.

2) Create a mapping file (XML) that shows the relationship between class attributes and the database (a sketch follows this list).

3) Lastly, use the Hibernate APIs to store and retrieve the persistent objects.
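A minimal sketch of step 2, an XML mapping file for the illustrative Employee class used earlier:

<hibernate-mapping>
    <class name="com.example.Employee" table="EMPLOYEE">
        <id name="id" column="ID">
            <generator class="native"/>
        </id>
        <property name="name" column="NAME"/>
    </class>
</hibernate-mapping>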

What is the purpose of Session.beginTransaction()?

Hibernate keeps a log of every data exchange with the help of a transaction. Whenever a new exchange of data is about to be initiated, Session.beginTransaction() is executed to begin the transaction.

What role does the SessionFactory interface play in Hibernate?

The application obtains Session instances from a SessionFactory. There is typically a single SessionFactory for the whole application, created during application initialization. The SessionFactory caches generated SQL statements and other mapping metadata that Hibernate uses at runtime. It also holds cached data that has been read in one unit of work and may be reused in a future unit of work.

SessionFactory sessionFactory = configuration.buildSessionFactory();

What is the general flow of Hibernate communication with RDBMS?

The general flow of Hibernate communication with RDBMS is as follows (a code sketch follows the list):

  • Load the Hibernate configuration file and create a Configuration object. This automatically loads all hbm mapping files.
  • Create a SessionFactory from the Configuration object.
  • Get a Session from the SessionFactory.
  • Create an HQL query.
  • Execute the query to get a list containing Java objects.
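A minimal sketch of that flow (the entity and query are illustrative, and org.hibernate.* and java.util.List imports are assumed):

// Loads hibernate.cfg.xml, which in turn loads the hbm mapping files
Configuration configuration = new Configuration().configure();
SessionFactory sessionFactory = configuration.buildSessionFactory();

Session session = sessionFactory.openSession();
List<Employee> employees =
        session.createQuery("from Employee", Employee.class).list();
session.close();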

What is Hibernate Query Language (HQL)?

Hibernate offers a query language that embodies a very powerful and flexible mechanism to query, store, update, and retrieve objects from a database. This language, Hibernate Query Language (HQL), is an object-oriented extension of SQL.
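For example, a simple HQL query sketch that filters on a mapped property rather than a table column (the entity and parameter names are illustrative):

List<Employee> result = session
        .createQuery("from Employee e where e.name = :name", Employee.class)
        .setParameter("name", "Smith")
        .list();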

