xTEN Coronavirus Precautions

Following the initial outbreak of Coronavirus, the xTEN team agreed to self-isolate should we show any symptoms and to work from home should travelling to the office be restricted. Since then, the UK government has issued details of what we should expect as the virus spreads, with the phased response of Contain, Delay, Research, and Mitigate.

Twelve people have tested positive in Yorkshire, and over half of these cases have been in Leeds. As it’s increasingly likely that the country will move to the Delay phase, we feel we’re in a position to do our part to delay the outbreak and reduce the strain on the NHS, so that those who are vulnerable get the care they need.

Therefore, any team members concerned about contracting the virus will be working from home this week, and everyone will be working from home from Monday 16th March until further notice.

The team routinely work from home and have done so effectively since 2017. Working from home does not impact our ability to provide the level of service our customers expect, and this decision will not affect any services or projects.

Connecting to SQL Server with MuleSoft AnyPoint 3.9

Updated 25 Jan 2019

I normally blog about SQL Server, but I thought I’d share my experience of using SQL Server from MuleESB. If you haven’t used Mule before, it’s an open-source ESB which also has a paid-for Enterprise edition. I’ve been using it for almost four years now and have been really impressed with how quickly we can create quite complex applications. It’s fast, easy to use, and free if you use the Community edition; some experience with Java also helps. I tend to use Mule for applications that process web requests or integrate with existing systems, and SQL Server Integration Services (SSIS) for moving data around.

I’ve been working on a MuleESB program which needs to log millions of requests a day, and with the recent advancements in SQL Server, it’s now the obvious choice for storing almost any type of data. You can also run SQL Server 2017 on Linux, which will please a lot of Mule users.

Mule previously had an MSSQL connector, but that has been deprecated and replaced with the Generic Database Connector, which works with most JDBC drivers. To get started, we’ll need to download the latest driver from Microsoft and add it to our project. We can then configure the database connector using a standard connection string.

Download and add the JDBC driver

At the time of writing, the latest version is JDBC 6.4, which you’ll need if you want to use SQL Server 2017, but my screenshots all reference JDBC 6.2.

Download Microsoft JDBC Driver for SQL Server

Add the driver to your Mule project using Add External JARs… and select either mssql-jdbc-6.2.2.jre7.jar or mssql-jdbc-6.2.2.jre8.jar, depending on your JRE.

Note: You’ll need JDBC 6.4 if you want a driver for JRE 9; you’ll also need Mule 4 as 3.9 doesn’t support Java 9.


Configuring the Database Connector

The database connector is surprisingly easy to configure: we simply add a standard connection string and reference the JDBC driver class, and the configuration is much the same whichever JDBC driver you use. Aireforge have created an online connection string builder for JDBC, which you may find useful.

Connection String Builder for JDBC


Here’s an example which connects to a local instance containing a database called TestDatabase. I’ve added the port, but this isn’t required unless it’s been changed from the default of 1433.
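As a rough sketch, the Mule 3.9 XML for that configuration looks something like the following; the config name is illustrative rather than anything you have to use.

```xml
<!-- Illustrative generic database config for Mule 3.9; the name and
     credentials are placeholders matching the test values in this post. -->
<db:generic-config name="Generic_Database_Configuration"
    url="jdbc:sqlserver://localhost:1433;databaseName=TestDatabase;user=TestUser;password=reallySecure!"
    driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
    doc:name="Generic Database Configuration" />
```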


The above is fine for testing, but I’d suggest using parameters stored in a central configuration file to make your life easier. This lets you set the environment automatically using a local config file, or switch between Dev and Production manually by commenting each section out. I’d also advocate setting an application name, as this helps you or the database team identify your application, rather than it simply showing up as a generic JDBC program.
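A sketch of the same configuration using property placeholders might look like this; the application name MuleTestApp and the pool sizes are just example values, not recommendations for your workload.

```xml
<!-- Placeholders are resolved from a global properties file; the
     applicationName and pool sizes here are illustrative only. -->
<db:generic-config name="Generic_Database_Configuration"
    url="jdbc:sqlserver://${mssql.server}:${mssql.port};databaseName=${mssql.database};user=${mssql.user};password=${mssql.password};applicationName=MuleTestApp"
    driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
    doc:name="Generic Database Configuration">
    <!-- Optional connection pooling; the default maximum is 5 -->
    <db:pooling-profile minPoolSize="2" maxPoolSize="20" />
</db:generic-config>
```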


Once this is set, you can store the following parameters in another Mule config file; I tend to use global_properties.xml.

 <!-- Test Database Settings -->
 <global-property name="mssql.server" value="" doc:name="SQL Server Instance" />
 <global-property name="mssql.port" value="1433" doc:name="SQL Server Port" />
 <global-property name="mssql.database" value="TestDatabase" doc:name="Database Name" />
 <global-property name="mssql.user" value="TestUser" doc:name="Username" />
 <global-property name="mssql.password" value="reallySecure!" doc:name="Password" />
<!-- Production Database Settings
<global-property name="mssql.server" value="proddb1.xten.uk" doc:name="SQL Server Instance" />
<global-property name="mssql.port" value="1433" doc:name="SQL Server Port" />
<global-property name="mssql.database" value="LiveDB" doc:name="Database Name" />
<global-property name="mssql.user" value="MuleApp" doc:name="Username" />
<global-property name="mssql.password" value="superSecure!" doc:name="Password" />
-->

Other Considerations

  • By default, the maximum number of database connections is 5, unlike .NET where it’s 100. If you think you’ll need more, make sure you set it higher.
  • If you’ll be using the connection frequently, I would strongly recommend setting a minimum connection pool size to lower the overhead on both Mule and SQL Server.
  • If you’re using the standard port, you don’t need to specify it, but it doesn’t hurt and it future-proofs your application.
  • Only bother with a reconnection strategy if you really need the data. If it’s just logging, consider throwing it away rather than adding load to a server which may already be struggling.
  • If you’re load balancing the connections, I’d also suggest setting the Load Balance Timeout; otherwise the connections will be sticky and won’t automatically rebalance after a server restart, for example.
  • Again, I would set an application name to help when diagnosing issues.
  • Consider enabling delayed durability when saving logging data, as it reduces the time spent committing in SQL Server and frees up database connections more quickly. I would also specify it for the specific stored procedure in use rather than forcing it for the entire database. Delayed durability brings the possibility of data loss, so make sure you understand the risks.
  • Consider using memory-optimized tables for very busy applications, then process the data using a separate process, which could be called from Mule or via the SQL Server Agent. Again, if the data isn’t critical, I would opt for a SCHEMA_ONLY table, which is faster, but you’ll lose the data should SQL Server restart. If this isn’t an option, you can use the default of SCHEMA_AND_DATA, which commits the data to disk.
  • Use stored procedures or parameterised queries when communicating with SQL Server, as this enables SQL Server to reuse execution plans.
  • If you decide to use memory-optimized tables, consider a natively compiled stored procedure, which will reduce the overhead of the queries and improve performance.
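To illustrate the last few points, here’s a T-SQL sketch. The table and procedure names are hypothetical, and it assumes a database that already has a memory-optimized filegroup (and, for the per-commit option, DELAYED_DURABILITY set to ALLOWED at the database level).

```sql
-- Hypothetical names throughout; assumes a memory-optimized filegroup exists
-- and, for delayed durability, ALTER DATABASE ... SET DELAYED_DURABILITY = ALLOWED.

-- SCHEMA_ONLY memory-optimized table: fastest option, but rows are lost
-- if SQL Server restarts.
CREATE TABLE dbo.RequestLog
(
    LogId     BIGINT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
    LoggedAt  DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Payload   NVARCHAR(4000) NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO

-- Natively compiled procedure, so Mule can log with minimal overhead.
CREATE PROCEDURE dbo.usp_LogRequest @Payload NVARCHAR(4000)
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'english')
    INSERT INTO dbo.RequestLog (Payload) VALUES (@Payload);
END;
GO

-- For a durable (SCHEMA_AND_DATA) table, delayed durability can instead
-- be applied per commit:
-- COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```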

Quick Test

I’ve created an HTTP service listening on 8081 that connects to the database, retrieves the server and service name, and returns them as JSON.
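The flow is roughly this shape in Mule 3.9 XML; the config names and path are illustrative, and the query simply reads @@SERVERNAME and @@SERVICENAME.

```xml
<!-- Illustrative quick-test flow; names and path are placeholders. -->
<http:listener-config name="HTTP_Listener_Configuration"
    host="0.0.0.0" port="8081" doc:name="HTTP Listener Configuration" />

<flow name="quickTestFlow">
    <http:listener config-ref="HTTP_Listener_Configuration" path="/test" doc:name="HTTP" />
    <db:select config-ref="Generic_Database_Configuration" doc:name="Get instance details">
        <db:parameterized-query><![CDATA[
            SELECT @@SERVERNAME AS serverName, @@SERVICENAME AS serviceName
        ]]></db:parameterized-query>
    </db:select>
    <json:object-to-json-transformer doc:name="Object to JSON" />
</flow>
```

Hitting http://localhost:8081/test should then return a small JSON array along the lines of [{"serverName":"…","serviceName":"MSSQLSERVER"}], depending on your instance.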


Here are the example results in JSON.


Final thoughts

Microsoft have made the cheaper (or free) versions of SQL Server very powerful by allowing us to use features which were previously Enterprise-only. For me, it does everything I need from a database: from in-memory tables that can handle thousands of updates a second, to large clustered columnstore tables that can handle the real-time analytical queries that drive dashboards.

It would be nice to see the jTDS driver get more attention, as it’s much faster than ODBC, but at the time of writing it doesn’t work with the latest versions of SQL Server and doesn’t have much documentation or support. Maybe Microsoft could release and maintain a jTDS driver in parallel to the JDBC driver?

Utilizing the cache scope could also prove quite powerful as it will reduce the resource requirements of SQL Server, whilst improving the performance of your Mule application. I’ll try to blog on this soon but feel free to pester me if I haven’t.

Further reading

Using the JDBC driver | Microsoft

Building the connection URL | Microsoft

Database Connector | MuleSoft

Cache Scope | MuleSoft

Defining Durability for Memory-Optimized Objects | Microsoft

Control Transaction Durability | Microsoft

Natively Compiled Stored Procedures | Microsoft

xTEN become a Redgate Certified Partner

We’re pleased to announce that xTEN are a Redgate Certified Partner and one of a handful of organisations that can offer SQL Monitor licenses on a monthly basis. We offer this as part of our managed services, but also to customers who may wish to scale up their monitoring for seasonal traffic or for short-term projects, such as consolidating servers and services.


Who are Redgate?

Redgate are the leading provider of software for professionals working on the Microsoft data platform. They help over 800,000 people in every type of organisation around the world, from small and medium-sized businesses to 91% of companies in the Fortune 100. Their SQL Toolbelt helps users implement DevOps best practice for their databases, easily monitor database performance, and improve team productivity.

Improving search times for Low Cost Travel Group

Low Cost Travel Group, an online travel agent, needed their infrastructure to work harder and smarter. xTEN used their expertise to:

  • Reduce average search times from over 4 seconds to 0.8 seconds.
  • Increase search capability by 40%.
  • Improve uptime and system availability.
  • Simplify infrastructure.
  • Improve system monitoring.

The result: More sales and happier customers. Efficient systems that were easier to maintain, without the costs of new infrastructure.  

Reducing the impact of downtime with Audley Travel

Audley Travel is a tour operator providing high-quality, tailor-made holidays. As the company was expanding quickly with multiple offices around the world, their aim was to improve resilience and strengthen their procedures in the event of unexpected downtime.

xTEN worked closely with Audley’s development and operations teams to design a solution that was best for them. Each office was given a local resilient cluster which can operate offline for days. This required some creative application changes and specialist knowledge of SQL Server. As well as this, we performed a SQL Server upgrade to utilise newer features and ensure version support.

The result: better protection against downtime. Thanks to the design, if one of Audley’s many office locations found their systems isolated, they could continue to work. This reduced the financial impact of time offline.