Articles related to research conducted by Empirica in the area of algorithmic trading and software development.

Now Crypto. Lessons learned from over 10 years of developing trading software

By Michal Rozanski, CEO at Empirica.

Reading news about crypto, we regularly see big money flowing into new companies with a lot of potentially breakthrough ideas. But aside from the hype on the business side, there are sophisticated technical projects going on underneath.

And for new cryptocurrency and blockchain ideas to be successful, these projects have to end with the delivery of great software systems that scale and last. Because we have been building these kinds of systems for the financial markets for over 10 years, we want to share a bit of our experience.

Read more on how Empirica delivers its trading software development services

“Software is eating the world”. I believe these words by Marc Andreessen. And now the time has come for financial markets, as technology is transforming every corner of the financial sector. Algorithmic trading, which is our speciality, is a great example. Other examples include lending, payments, personal finance, crowdfunding, consumer banking and retail investments. Every part of the finance industry is experiencing rapid changes triggered by companies that propose new services with heavy use of software.

If crypto relies on software, and there is so much money flowing into crypto projects, what should you look for when running a trading software project for cryptocurrency markets? Our trading software development projects for the capital and crypto markets, as well as building our own algorithmic trading platform, have taught us a lot. Now we want to share the lessons learned from these projects.

 

  1. The process – be agile.

Agile methodology is the essence of how software projects should be run. Short iterations. Frequent deliveries. Fast and constant feedback from users. Having a working product from the early iterations gives you the best understanding of where you are now and where you should go.

It doesn't matter whether you outsource the team or build everything in-house, or whether your team is local or remote. Agile methodologies like Scrum or Kanban will help you build better software, lower the overall risk of the project and show business value sooner.

 

  2. The team – hire the best.

A few words about productivity in the software industry. The quotation comes from my favourite article by Robert Smallshire, 'Predictive Models of Development Teams and the Systems They Build': '… we know that on a small 10 000 line code base, the least productive developer will produce about 2000 lines of debugged and working code in a year, the most productive developer will produce about 29 000 lines of code in a year, and the typical (or average) developer will produce about 3200 lines of code in a year. Notice that the distribution is highly skewed toward the low productivity end, and the multiple between the typical and most productive developers corresponds to the fabled 10x programmer.'

I don’t care what people say about lines of code as a metric of productivity. That’s only used here for illustration.

The skills of the people may not be that important when you are building relatively simple portals with some basic backend functionality. Or mobile apps. But if your business relies on sophisticated software for financial transactions processing, then the technical skills of those who build it make all the difference.

And this is the answer to the unasked question of why we at Empirica hire only the best developers.

We tech founders tend to forget how important it is to have not only the best developers but also the best specialists in the area in which we want to market our product. If you are building an algo trading platform, you need quants. If you are building an omnichannel banking system, you need bankers. Besides, especially in the B2B world, you need someone who will speak to your customers in their language. Otherwise, your sales will suffer.

And finally, unless you hire a subcontractor experienced in your industry, your developers will not understand the nuances of your area of finance.

 

  3. The product – outsource or build in-house?

If you are seriously considering building a new team in-house, please read the points about performance and quality, and ask yourself the question – 'Can I hire people who are able to build systems at the required performance and stability levels?'. And these auxiliary questions – can you hire developers who really understand multithreading? Are you able to really check their abilities, hire them, and keep them with you? If yes, then you have a chance. If not, you are better off outsourcing.

And when deciding on outsourcing – do not outsource just to any IT company hoping they will take care of everything. Find a company that builds systems similar to what you intend to build. Similar not only from a technical side but also from a business side.

Can outsourcing be done remotely without unnecessary risk to the project? It depends on a few variables, but yes. Firstly, the skills mentioned above are crucial; not the place where people sleep. Secondly, there are many tools that make remote work as smooth as local work: Slack, Trello, GitHub, daily standups on Skype. Use them. Thirdly, find a team with proven experience in remote agile projects. And finally – the product owner will be the most important position for you to cover internally.

And one remark about a hidden cost of in-house development, inseparably related to the IT industry – staff turnover costs. Depending on the source of the research, turnover rates for software developers are estimated at 25% to even 38%. That means that when building your in-house team, every fourth or even every third developer will not be with you a year from now. Finding a good developer takes months. Getting a new developer up to speed takes another few months. When deciding on outsourcing, you are also outsourcing the cost and stress of staff turnover.

 

  4. System's performance.

For many crypto projects, especially those related to trading, system performance is crucial. Not for all, but when it is important, it is really important. If you are building a lending portal, performance isn't as crucial. Your customers are happy if they get a loan in a few days or weeks, so it doesn't matter if their application is processed in 2 seconds or in 2 minutes. If you are building an algo trading operation or a bitcoin payment processing service, you measure time in milliseconds at best, maybe even in nanoseconds. And then system performance becomes a key input to the product roadmap.

95% of developers don't know how to program with performance in mind, because 95% of software projects don't require those skills: thinking about where bytes of memory go, when they will be cleaned up, and which structure is most efficient for a given operation on a given type of object. Or the nightmare of IT students – multithreading. I can count on my hands the people I know who truly understand this topic.

 

  5. Stability, quality and level of service.

Trading, understood as an exchange of value, is all about trust. And software in crypto usually processes financial transactions in some way.

Technology may change. Access channels may change. You may not have the word 'bank' in your company name, but you must have a bank's level of service. No one in the world would let someone play with their money. Allowing the risk of technical failure may put you out of business. You don't want to skimp on technology. In the crypto space there is no room for error.

You don't achieve quality by putting 3 testers behind each developer. You achieve quality through the product development process. And that's what the next point is about.

 

  6. The DevOps.

The core idea behind DevOps is that the team is responsible for all the processes behind the development and continuous integration of the product. And it’s clear that agile processes and good development practices need frequent integrations. Non-functional requirements (stability and performance) need a lot of testing. All of this is an extra burden, requiring frequent builds and a lot of deployments on development and test machines. On top of that there are many functional requirements that need to be fulfilled and once built, kept tested and running.

On many larger projects the team is split into developers, testers, release managers and system administrators working in separate rooms. From a process perspective this is unnecessary overhead. The good news is that this is more the bank's way of doing business, rarely the fintech way. This separation of roles creates an artificial border between when functionalities are complete from the developers' point of view and when they are really done – tested, integrated, released, stable, ready for production. By putting all responsibilities in the hands of the project team you can achieve similar reliability and availability, with a faster time to market. The team also communicates better and can focus its energy on the core business, rather than administration and firefighting.

There are big savings in time and cost in automation, and there are a lot of things that can be automated. Our DevOps processes have matured with our product, and now they are our most precious assets.

 

  7. The technology.

The range of technologies applied in crypto software projects can be as wide as in any other industry. Which technology best fits the project depends, well, on the project. Some projects are really simple, such as a mobile or web application without complicated backend logic; here technology will not be a challenge. But generally speaking, crypto projects can be some of the most challenging projects in the world, and here the technologies applied can be the difference between success and failure. Need to process 10K transactions per second with a mean latency under 1/10th of a millisecond? You will need proven technology, will probably have to give up standard application servers, and will have to write a lot of things from scratch to control the latency at every level of the critical path.

Mobile, web, desktop? This is more of a business decision than a technical one. Some say the desktop is dead. Not in trading. If you sit in front of the computer all day and need to refer to more than one monitor, forget mobile or web. As for your iPhone? It can be used as an additional channel, when you go to lunch, to briefly check that the situation is under control.

 

  8. The Culture.

After all the points up to now, you have a talented team, working as a well-oiled mechanism with agile processes, who know what to do and how to do it. Now you need to keep spirits high through the next months or years of the project.

And it takes more than a cool office, table tennis, Xbox consoles or Friday parties to build the right culture. Culture is about shared values. Culture is about a common story. With our fintech products or services we are often going up against big institutions. We are often trying to disrupt the way their business used to work. We are small and want to change the world, going to war with the big and the powerful. Doesn't it look like another variation of the David and Goliath story? Don't smile; this is one of the most effective stories. It unifies people and makes them move in the same direction with a strong feeling of purpose, a mission. This is something many startups in other, non-fintech, branches can't offer. If you are building the 10th online grocery store in your city, what can you tell your people about the mission?

Read more on how Empirica delivers its crypto software development services

 

Final words

Crypto software projects are usually technologically challenging. But that is just a risk that needs to be properly addressed with the right people and processes or with the right outsourcing partner. You shouldn’t outsource the responsibility of taking care of your customers or finding the right market fit for your product. But technology is something you can usually outsource and even expect significant added value after finding the right technology partner.

At Empirica we have taken part in many challenging crypto projects, so learn our lessons, learn from others, learn your own and share them. This cycle of learning, doing and sharing will help the crypto community build great systems that change the rules of the game in the financial world!

 

 

Modern monitoring software – things to look for and things to avoid

Monitoring software sits at the base of a company's IT stack. Without monitoring, organizations are blind to the factors that affect the performance, reliability, scalability and availability of their systems. Once installed, monitoring becomes essential to an organization's performance and embedded into business and operational workflows. A number of industry trends are currently changing the way organizations manage, use and deploy software applications and their underlying technology infrastructure. These trends are creating a significant opportunity to displace existing monitoring solutions and reshape the product category. They include:

 

Read more on Crypto Exchange Monitoring Software

 

Modern technologies create significant challenges for IT. Technologies such as containers, microservices and serverless computing produce IT environments that are highly ephemeral compared to static legacy on-premise environments. The number of SaaS platforms and open source tools offered to IT organizations has exploded, giving developers far more options to use powerful and agile services compared to the few standardized vendor suites of the on-premise world. The scale of computing resources required from the cloud has grown exponentially and can be called upon in rapid, sometimes unpredictable, bursts of increased computing capacity compared to the static nature and smaller scale of legacy data centres. The rate of change of application development in the cloud has increased dramatically, as applications are updated in days or minutes compared to weeks or years. These challenges have made it extremely difficult to gain visibility and insight into application and infrastructure performance, and legacy monitoring tools have struggled to adapt.

 

We are in the early days of change. A seismic shift is underway from static IT architectures to dynamic multi-cloud architectures with ephemeral technologies such as containers, microservices and serverless computing. According to Gartner, as the cloud becomes mainstream from 2018 to 2022, it will influence ever greater portions of enterprise IT decisions, with over $1 billion in enterprise IT spend at stake in 2019. The change allows businesses to improve agility, accelerate innovation and better manage costs. As companies migrate to the cloud and their infrastructure changes, so does the monitoring of that infrastructure. We are in the early days of this huge transformation: according to Gartner, only 5% of applications were monitored as of 2018. Worldwide spend on public cloud services, including infrastructure-as-a-service and platform-as-a-service, is anticipated to grow from $60 billion in 2018 to roughly $173 billion in 2022, according to IDC, representing a 30% compound annual growth rate.

 

Collaboration between development and operations teams is critically important. DevOps is a practice and culture characterized by developers and IT operations teams working collectively, each with ownership of the entire product development cycle. DevOps is necessary for achieving the agility and speed required for building and maintaining modern applications, but these teams have historically been siloed. In the static, on-premise world, developers and IT operations personnel functioned independently, with different objectives, priorities and resources. Developers would focus on writing code to create the best applications, and operations teams would focus on analyzing, scaling and deploying the applications. These teams generally did not collaborate and had separate systems and tools to track performance. Often the lack of communication between Dev and Ops teams would result in problems with application performance, because the code may not have been written with the most efficient deployment in mind, resulting in difficulty scaling, latency and other performance issues. The cycle of code rewrites could be protracted, but it was acceptable in the static world where software releases occurred once a year. In the cloud era, where the frequency of software updates is days or minutes, this communication and coordination between Dev and Ops is essential to ensuring rapid implementation and maximizing business performance. With mission-critical processes being powered by software, Dev and Ops teams must collaborate to optimize both technology and business functionality. As a result, Dev and Ops teams need tools that provide a unified perspective of both technology and business performance so they can collaborate in real time to optimize business success.

 

Organizations must digitally transform their businesses to compete. There has been a fundamental shift in the way organizations use technology to interact with their clients and compete in the marketplace. This rise in influence is directly connected to the increased quantities of resources organizations are devoting to building differentiated, mission-critical software. Poor technology performance negatively affects business results and user experience, showing up in lost revenue, customer churn, brand perception and employee productivity. Thus, companies across all industries are investing to digitally transform their businesses and improve the experience of their customers. At the same time, their investments in monitoring this digital transformation are growing significantly. According to Gartner, enterprises will quadruple their usage of APM from 2018 through 2021, due to increasingly digitalized business processes, to reach 20 percent of business applications.

 

Limitations of Offerings

 

Legacy commercial and homegrown technologies were created to operate with monolithic, static, on-premise environments. These approaches typically exhibit the following critical limitations:

 

Not built to work with a wide set of technologies. Legacy technologies aren't designed to operate in heterogeneous environments with a plethora of vendors, software and technologies. Instead, these offerings are built to work with a limited variety of legacy, on-premise vendor suites and can't take advantage of the modern SaaS and open source technologies the industry has recently embraced.

 

Not built for collaboration between development and operations teams. Legacy offerings often force development and operations groups to use disparate monitoring technologies that don't share a common framework or set of data and analytics. This makes collaboration between Dev and Ops teams hard and can often lead to sub-optimal business outcomes.

 

Lack of sophisticated analytics. Legacy on-premise architectures lack scalability in collecting and processing large, comprehensive datasets. Users of these legacy technologies frequently have to manually collect and integrate data from disparate systems and IT environments. The lack of data scale and aggregation can make it challenging to train modern machine-learning algorithms, leading to less precise insights.

 

Not built for cloud scale. Legacy technologies aren't designed for cloud-scale environments and the rapid, sometimes unpredictable, bursts of computing resources required by modern software.

 

Not built for dynamic infrastructure. Most offerings were built for static infrastructures where infrastructure components and applications are deployed once and rarely change. These solutions cannot visualize and monitor technologies like clouds, containers and microservices, which are highly dynamic and ephemeral in nature.

 

There are a number of contemporary commercial technologies that have attempted to tackle the shortcomings of legacy approaches. These approaches typically exhibit the following limitations:

 

Point solutions lack depth of visibility and insight. Point solutions can't offer integrated infrastructure monitoring, application performance monitoring and log management on a single platform and therefore lack the visibility, insight and context required for optimal collaboration.

 

Monitoring sprawl exacerbates alert fatigue. Disparate tools frequently exacerbate the alert fatigue suffered by many organizations. Gartner notes the need for companies to trim down the number of monitoring tools used, which in the case of larger enterprises is more than 30, while smaller businesses use between three and 10.

 

Difficult to use and to set up. These technologies often have complex implementation processes requiring significant professional services. These offerings are complicated to use, requiring extensive upfront and ongoing training and time commitment.

 

These offerings are intended to tackle very specific use cases for a small cadre of users and can require heavy implementation expenses and services in order to derive value. They aren't easily extensible to a broad set of use cases or to a larger number of technology and business users.

 

Key Strengths of modern solutions

 

The old model of siloed developers and IT operations engineers is broken, and the legacy tools used for monitoring static on-premise architectures don't work in modern cloud or hybrid environments. A cloud-native platform empowers development and operations teams to collaborate, quickly build and improve software, and drive business performance. With out-of-the-box functionality and simple, self-service installation, customers can quickly deploy the platform to gain application- and infrastructure-wide visibility, often within minutes.

 

Built for dynamic cloud infrastructures. Our platform was created in the cloud and built to work with ephemeral cloud technologies such as microservices, containers and serverless computing. Our data model was built to operate at cloud scale with dynamic data collection and processes more than 10 trillion events a day.

 

Our system is easy to use, with out-of-the-box integrations, customizable dashboards, real-time visualization and prioritized alerting. The platform can be set up in a self-service process within seconds, enabling users to derive value without lengthy implementation, technical training or customization. It is extensible across a wide range of use cases for developers, operations engineers and business users. As a result, our platform is used every day and is integral to business operations, and our customers find more value in the solution as time passes.

 

Integrated data platform. We were the first to unite the "three pillars of observability" – metrics, traces and logs – with the introduction of our log management solution in 2018. Today, our platform combines infrastructure monitoring, application performance monitoring, log management, user experience monitoring and network performance monitoring in a single integrated data platform. This approach increases efficiency by reducing the cost and friction of trying to glean insights from disparate systems. We are able to provide a unified view across the IT stack, including infrastructure and application performance, as well as real-time events. Each of our products is integrated, and taken together they provide the ability to see metrics, traces and logs side by side and perform correlation analysis.

 

Built for collaboration. Our platform was built to break down the silos between development and operations teams in order to help organizations adopt DevOps practices and enhance overall business performance. We provide development and operations teams with a common set of tools to develop a shared understanding of application performance and shared insights into the infrastructure supporting the applications. Additionally, our customizable dashboards can be shared with business teams to provide them with actionable insights.

 

Cloud agnostic. Our system is designed to be deployable across all environments, including public cloud, private cloud, on-premise and multi-cloud hybrid environments, enabling organizations to diversify their infrastructure and reduce dependence on any individual vendor.

 

Ubiquitous. Cloud systems are often deployed across a customer's entire infrastructure, which makes them ubiquitous. In contrast to legacy systems, which are frequently used exclusively by a few users within a company's IT operations group, modern systems ought to be part of the daily lives of developers, operations engineers and business leaders.

 

Integrates with our clients' complex environments. We empower development and operations teams to harness the full range of SaaS and open source tools. We have over 350 out-of-the-box integrations with technologies that provide substantial value to our customers without the need for professional services. Our integrations provide detailed data point aggregation and up-to-date, high-quality customer experiences across heterogeneous IT environments.

 

Powered by machine learning and analytics. Our system ingests large amounts of data into our unified data warehouse. We create actionable insights using our advanced analytics capabilities. Our platform includes machine learning that can cross-correlate metrics, logs and traces to identify outliers and notify users of potential anomalies before they impact the business.

 

Scalable. Our SaaS platform is highly scalable and is delivered via the cloud. It currently monitors more than 10 trillion events per day and millions of containers and servers at any point in time. We offer easily accessible data retention at full granularity for extended periods, which can provide clients with a comprehensive view of their historical data.

 

Key Benefits

 

Our system provides the following key benefits to our customers:

 

Enable operational efficiency. Our solution is easy to set up, which eliminates the need for professional services and heavy implementation costs. We have over 350 out-of-the-box integrations with key technologies from which our customers can derive value, avoiding the internal development costs and services necessary to create those integrations. Our customer-centric pricing model is tailored to customers' desired usage requirements. For example, our log management solution has differentiated pricing for logs indexed versus logs ingested. Our platform enables customers to better understand the operational demands of their software and IT environments, allowing greater efficiency in resource allocation and spend on cloud infrastructure.

 

Using APM, infrastructure and log data in our system, our customers can quickly isolate the source of application issues in one place, where previously they would have had to spend hours investigating with disparate tools. Additionally, our machine learning algorithms are trained on the large amounts of data our customers send us to detect anomalies and predict failures in client systems in real time, something that would otherwise not be possible.

 

Improve agility of development, operations and business teams. We remove the silos between development and operations teams and provide a platform that enables agile and efficient development through the adoption of DevOps. Our platform enables development and operations teams to collaborate with a shared understanding of data and analytics. This helps them develop a joint understanding of application performance and shared insights into the infrastructure behind the applications. Additionally, our customizable and easy-to-understand dashboards can be shared with business teams to provide them with real-time, actionable insights into business performance.

 

Read more on Crypto Exchange Monitoring Software

 

Accelerate digital transformation. We empower customers to take advantage of the cloud and to develop and maintain mission-critical applications with agility and confidence, in the face of business and time pressure and infrastructure complexity. Because of this, our system helps accelerate innovation cycles, deliver exceptional digital experiences and optimize business performance.

HFT – the good, the bad and the ugly

High Frequency Trading, also known as HFT, is a technology for executing market strategies. HFT is characterized by technically simple algorithms with negligible execution time, running on software optimized for data structures, memory usage and processor use, as well as suitable hardware, co-location and ultra-low-latency data feeds.

 

Although HFT has existed in the markets for over 20 years, it has become one of the hottest topics of the past few years. This is due to several factors, such as the May 6, 2010 "Flash Crash", the recent poor market conditions and Michael Lewis's book "Flash Boys". Let's look at where all the fuss comes from.

 

The Bad

 

Among other things, the advantage over other market participants and the ability to detect market inefficiencies are the reasons why so many people criticize HFT so strongly. The most common charges put on the table are:

 

  • Front Running – HFT companies use early access to incoming quotes to buy shares before other investors, then turn around and sell the just-bought shares back to them at a slightly higher price.
  • Quote Stuffing – a form of market manipulation based on quickly sending and withdrawing large numbers of orders. Because of the speed of these operations, it creates a false impression of the market situation and leads other participants to execute against phantom orders, leaving HFT investors to exploit the resulting favorable prices.
  • Spoofing – another method of market manipulation, based on placing orders and then cancelling them to push the price up or down. A big order is placed on the market to bait other investors, and when the market starts to react, it is quickly cancelled. The new price then allows the HFT investor to take a profit.

 

But that's just the tip of the iceberg. It is often said that there is a lack of proper HFT regulation, there is a false belief that there are unregulated dark pools where HFT companies can hide their activity, and there is still an active argument about whether HFT brings liquidity to the market or just useless volume.

 

The Ugly?

 

Bill Laswell once said: "People are afraid of things they don't understand. They don't know how to relate. It threatens their security, their existence, their career, image." That phrase fits perfectly with what is happening now around the topic of High Frequency Trading. If people took a closer look at how exchanges work, they would probably be less sceptical about High Frequency Trading.

 

On most, maybe even all, exchanges there are two mechanisms that can efficiently handle the problem of quote stuffing and spoofing. The first is a limit on the number of messages per second that can be sent by one client. For example, on the New York Stock Exchange there is a limit of 1,000 messages per second, which means that if an HFT company bursts all 1,000 messages in the first half of that period, it cannot send any message in the second half, so it is cut out of the market. The other limitation used by exchanges is a limit on the number of messages per trade, which hits quote stuffing and spoofing even harder. In most cases the limit is around 500 messages per trade, and anyone who exceeds it should be prepared for fines. On top of that, a company that frequently breaks the limits can be banned from the exchange for some time.
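
As a rough illustration of how such throttles work, here is a minimal Python sketch of the two limits described above. The thresholds (1,000 messages per second and 500 messages per trade) are the figures quoted in this article; the implementation itself is a hypothetical simplification, not any exchange's actual mechanism.

from collections import deque
import time

MAX_MESSAGES_PER_SECOND = 1000   # per-client burst limit quoted above
MAX_MESSAGES_PER_TRADE = 500     # order-to-trade ratio limit quoted above

class ClientThrottle:
    def __init__(self):
        self.recent = deque()    # timestamps of messages sent in the last second
        self.messages = 0        # all messages: new orders, modifications, cancellations
        self.trades = 0          # executed trades

    def allow_message(self, now=None):
        """Return True if the client may send another message right now."""
        now = time.monotonic() if now is None else now
        while self.recent and now - self.recent[0] > 1.0:
            self.recent.popleft()               # forget messages older than one second
        if len(self.recent) >= MAX_MESSAGES_PER_SECOND:
            return False                        # burst used up: client is cut off
        self.recent.append(now)
        self.messages += 1
        return True

    def record_trade(self):
        self.trades += 1

    def breaches_ratio_limit(self):
        """True if the client's message-to-trade ratio exceeds the cap."""
        return self.messages / max(self.trades, 1) > MAX_MESSAGES_PER_TRADE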

 

When we talk about front running, the first thing we have to know is that front running, in the dictionary meaning, is an illegal action, and there are big fines for market participants caught doing it. Front running means using information about new orders before they reach the order book. Let's say a broker gets a new limit order to process, but before putting it on the exchange, he buys all available shares at a better price than the limit and then executes the client's order at the limit, pocketing the extra profit. That is strictly forbidden, and it is not what HFT companies do.

 

All they do is track the data feed, analyze quotes, trades and statistics, and based on that information try to predict what is going to happen in the next seconds. Of course, they have an advantage in data feed latency thanks to co-location, better connections and better algorithms, but it's still fair.

[Figure: HFT scalping for large orders (source: Wikipedia)]

 

HFT companies have to play by the same rules as other market participants, so they don't have any special permits letting them do things that are not allowed for others. The same goes for dark pools, especially since they are regularly controlled by financial regulators.

 

The Good

 

First, we have to know that the suppliers of liquidity, i.e. market makers and some investors, use HFT. They place orders on both sides of the book and are constantly exposed to sudden market movements against them. The sooner such investors are able to respond to changes in the market, the more willing they are to place orders and accept narrower spreads. For market makers the greatest threat is the inability to respond quickly to a changing market situation and the fact that someone else could execute against their stale orders.

 

System performance in this case is a risk management tool. Investments in infrastructure, both software and hardware (including co-location), improve their situation in terms of risk profile. The increase in speed then has a long-term, positive, qualitative impact on the entire market, because it leads to a narrowing of the spread between bids and offers – that is, it reduces transaction costs for other market participants and increases the liquidity of the instruments.

 

HFT AND MARKET QUALITY

 

In April 2012, IIROC (the Investment Industry Regulatory Organization of Canada), the Canadian regulatory body, changed its fee structure, which until then had been based only on transaction volume, adding tariffs and fees that also take into account the number of messages sent (new orders, modifications and cancellations). As a result, the new fees made high-frequency trading more difficult. This was very clearly illustrated by data from the Canadian market.

 

In the following months these fees caused a 30% decrease in the number of messages sent by market participants and hit, as you might guess, precisely the institutions that use high-frequency trading, including market makers. The consequence for the whole market was a 9% increase in the average bid-ask spread.

NO PLACE FOR MISTAKES

 

When people talk about HFT, both enthusiasts and critics, it is not rare to hear that HFT is risk free. On the face of it, after analyzing how HFT works, you might agree, but there is a dangerous side of HFT that is not so obvious and that people often forget about. HFT algorithms work great if the code is well written, but what would happen if someone ran wrong, badly tested or incompatible code on a real market?

 

We don't have to guess, because it happened once and failed spectacularly – the "Knightmare". The week before the unfortunate 1st of August 2012, Knight Capital started uploading a new version of its proprietary software to eight of its servers. However, Knight's technicians didn't copy the new code to one of the eight servers. When the market opened at 9:30 AM and all eight servers were running, the horror began. The old, incompatible code interfered with the new one and Knight Capital started losing over $170,000 every second.

(source: nanex.net)

This went on for 45 minutes before someone managed to turn the system off. In that period Knight Capital lost around $460 million and was pushed to the brink of bankruptcy. It was a valuable lesson for all market participants that there is no place for mistakes in the HFT ecosystem, because even though you can gain a lot of money fast, you can lose even more, even faster.

 

SUMMARY

 

HFT is a natural result of the evolution of financial markets and the development of technology. Companies that invest their own money in technology in order to take advantage of market inefficiencies deserve to profit like any other market participant.

 

HFT is not as black as it is painted.

 


 

TWAP Algorithm

Time-Weighted Average Price (TWAP) is another trading algorithm based on a weighted average price, and compared to Volume-Weighted Average Price (VWAP) its calculations are even simpler. It is also one of the first execution algorithms, and unlike most algorithms nowadays it is a passive execution algorithm that waits for the right market price to come instead of chasing it.

 

Calculations

 

As TWAP doesn't take volume into account, it is extremely simple to calculate. All it takes is to get the Typical Price for every period bar using the equation below and then calculate the average of the Typical Prices.

 

Read more on how we develop trading algorithms for capital and cryptocurrency markets

 

 

Typical Price = (Close+High+Low+Open)/4

 

Let's take a look at example results calculated on 1-minute interval intraday bars of Morgan Stanley's stock.

 

Time Close High Low Open Typical Price TWAP
09:30:00 38.90 38.96 38.90 38.96 38.93 38.930
09:31:00 38.94 38.97 38.86 38.92 38.92 38.926
09:32:00 38.91 38.96 38.91 38.94 38.93 38.928
09:33:00 38.89 38.94 38.88 38.92 38.91 38.922
09:34:00 38.90 38.94 38.90 38.90 38.91 38.920
09:35:00 38.97 38.97 38.90 38.90 38.93 38.922
09:36:00 38.92 38.96 38.92 38.96 38.94 38.925
09:37:00 38.90 38.93 38.86 38.93 38.91 38.922
09:38:00 38.90 38.92 38.89 38.89 38.90 38.920
09:39:00 38.92 38.92 38.88 38.91 38.91 38.918
09:40:00 38.90 38.92 38.88 38.91 38.90 38.917
09:41:00 38.84 38.89 38.82 38.89 38.86 38.912
09:42:00 38.87 38.87 38.84 38.84 38.86 38.908
09:43:00 38.85 38.89 38.84 38.89 38.87 38.905
09:44:00 38.81 38.85 38.80 38.85 38.83 38.900
09:45:00 38.69 38.80 38.67 38.80 38.74 38.890
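
For readers who prefer code to tables, here is a minimal Python sketch of the same calculation, using the first three bars from the table above as (close, high, low, open) tuples; the printed values match the Typical Price and TWAP columns up to rounding.

bars = [
    (38.90, 38.96, 38.90, 38.96),  # 09:30
    (38.94, 38.97, 38.86, 38.92),  # 09:31
    (38.91, 38.96, 38.91, 38.94),  # 09:32
]

typical_prices = []
for close, high, low, open_ in bars:
    typical_prices.append((close + high + low + open_) / 4)   # Typical Price of the bar
    twap = sum(typical_prices) / len(typical_prices)          # running average so far
    print(f"typical={typical_prices[-1]:.2f}  TWAP={twap:.3f}")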

 

Strategy

 

The most common use of TWAP is for distributing big orders throughout the trading day. For example, let's say you want to buy 100,000 shares of Morgan Stanley. Placing one such big order would vastly impact the market and the price would most likely start to rise. To prevent that, an investor can define a time period in the TWAP strategy over which they want to buy the shares. The strategy will slice the big order evenly into smaller ones and execute them over the defined period, as in the sketch below.
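
A minimal sketch of that even slicing, assuming a hypothetical 4-hour execution window and one child order every 5 minutes (both numbers are made up for illustration):

parent_quantity = 100_000          # shares to buy in total
duration_minutes = 4 * 60          # assumed execution window
slice_interval_minutes = 5         # assumed spacing of child orders
num_slices = duration_minutes // slice_interval_minutes

child_quantity = parent_quantity // num_slices
schedule = [child_quantity] * num_slices
schedule[-1] += parent_quantity - child_quantity * num_slices   # last slice absorbs rounding

print(num_slices, child_quantity, sum(schedule))   # 48 slices of ~2083 shares, 100000 in total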

 

TWAP can be used as an alternative to VWAP, but because of its simplicity we have to remember some pitfalls. Even though we slice big orders, we do it evenly, so there is a chance of hitting a low-liquidity period in which our sliced order will impact the market hard. That's why it's recommended to use TWAP over short periods or on stocks that are believed not to have any volume profile to follow.

 

Be random

 

There is also another threat coming directly from dividing a big order evenly, namely other traders and predatory algorithms. Trading in such a predictable way can lead to a situation where other traders or algorithms see through our strategy and start to "game" us.

 

Barry Johnson, in his book [2], suggests adding some randomness to the strategy as a solution to this issue. He writes: "We can use the linear nature of the target completion profile to adopt a more flexible trading approach. At any given time, we can determine the target quantity the order should have achieved just by looking up the corresponding value on the completion rate chart."

 

In practice it means that when we run a 4-hour TWAP we don't slice the order into even parts; instead we focus on percentage completion. So, for instance, we would want to have 25% of the strategy completed by the first hour, 50% by the second and 75% by the third. That gives more freedom in the size of the orders, so we can be more random with them and hence less predictable to other traders on the market, as in the sketch below.
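
Continuing the sketch above, here is an illustration of that completion-profile approach: each minute we check how far behind the linear target we are and send a child order of randomized size. The one-minute step and the +/-50% jitter range are arbitrary assumptions for illustration, not recommendations.

import random

parent_quantity = 100_000
duration_minutes = 4 * 60
executed = 0

for minute in range(1, duration_minutes + 1):
    target_completion = minute / duration_minutes            # linear profile: 25% after hour one, 50% after hour two...
    target_quantity = int(parent_quantity * target_completion)
    behind = target_quantity - executed
    if behind <= 0:
        continue                                              # already at or ahead of the target
    jitter = random.randint(-behind // 2, behind // 2)        # randomize the child order size
    child = min(behind + jitter, parent_quantity - executed)
    executed += max(child, 0)

print(parent_quantity - executed, "shares left for a final cleanup slice")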

 

TWAP vs VWAP

 

As both indicators use the same mechanism, i.e. a weighted average price, it's common to compare them. Although VWAP is more complex and includes volume in its calculations, on instruments with low turnover TWAP and VWAP values can be close. On the other hand, when a session becomes more volatile, the two indicators will diverge.
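
To make the difference concrete, here is a tiny sketch contrasting the two averages on the same three bars. The typical prices and volumes are invented for illustration; the point is that a small, low-priced trade moves TWAP noticeably while barely moving VWAP.

bars = [(38.93, 12_000), (38.92, 3_000), (38.74, 500)]   # (typical price, volume); made-up numbers

twap = sum(price for price, _ in bars) / len(bars)
vwap = sum(price * volume for price, volume in bars) / sum(volume for _, volume in bars)

print(f"TWAP={twap:.3f}  VWAP={vwap:.3f}")   # the tiny 500-share bar drags TWAP down, VWAP barely moves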

 

 

The table below shows TWAP and VWAP calculated for a whole trading day. As we can see, at the beginning of the trading day the difference is less than a cent, but by the close the difference has grown to 2 cents. This happened because during the day there were some small-volume trades at lower prices that didn't affect VWAP, but did affect TWAP.

 

Time Close High Low Open TWAP VWAP
09:44:00 38.81 38.85 38.80 38.85 38.900 38.904
09:45:00 38.69 38.80 38.67 38.80 38.890 38.887
15:57:00 38.70 38.70 38.68 38.69 38.666 38.686
15:58:00 38.71 38.72 38.68 38.70 38.666 38.686

 

Summary

 

The TWAP strategy is another great tool for executing big orders without impacting the market too hard. Like everything, it has its pros and cons, and it's up to us to decide whether TWAP is the best strategy for our case or whether we should consider using VWAP or another strategy.

 

 

Read more on how we develop trading algorithms for capital and cryptocurrency markets

 

References

  1. H. Kent Baker, Greg Filbeck, "Portfolio Theory and Management" (2013), p. 421
  2. Barry Johnson, "Algorithmic Trading & DMA – An introduction to direct access trading strategies" (2010), pp. 123-126