Wednesday, 28 March 2018

Low-Latency Trading System Architecture




Trading Floor Architecture.




Executive Overview.


Increased competition, higher market data volumes, and new regulatory demands are some of the driving forces behind industry changes. Firms are trying to maintain their competitive edge by constantly changing their trading strategies and increasing the speed of trading.


A viable architecture has to include the latest technologies from both the network and application domains. It has to be modular to provide a manageable path for evolving each component with minimal disruption to the overall system. Therefore, the architecture proposed by this paper is based on a services framework. We examine services such as ultra-low latency messaging, latency monitoring, multicast, computing, storage, data and application virtualization, trading resilience, trading mobility, and thin client.


The solution to the complex requirements of the next-generation trading platform must be built with a holistic mindset, crossing the boundaries of traditional silos like business and technology or applications and networking.


The main goal of this document is to provide guidelines for building an ultra-low latency trading platform while optimizing raw throughput and message rate for both market data and FIX trading orders.


To achieve this, we propose the following latency-reduction technologies:


• High-speed interconnect, either InfiniBand or 10 Gbps connectivity, for the trading cluster.


• High-speed messaging bus.


• Application acceleration via RDMA without application re-coding.


• Real-time latency monitoring and re-direction of trading traffic to the path with the lowest latency.


Industry Trends and Challenges.


Next-generation trading architectures have to respond to increased demands for speed, volume, and efficiency. For example, the volume of options market data is expected to double after the introduction of options penny trading in 2007. There are also regulatory demands for best execution, which require handling price updates at rates that approach 1M msg/sec for exchanges. They also require visibility into the freshness of the data and proof that the client got the best execution possible.


In the short term, speed of trading and innovation are key differentiators. An increasing number of trades are handled by algorithmic trading applications placed as close as possible to the trade execution venue. A challenge with these "black box" trading engines is that they compound the volume increase by issuing orders only to cancel and re-submit them. The cause of this behavior is lack of visibility into which venue offers best execution. The human trader is now a "financial engineer," a "quant" (quantitative analyst) with programming skills who can adjust trading models on the fly. Firms develop new financial instruments like weather derivatives or cross-asset class trades and they need to deploy the new applications quickly and in a scalable fashion.


In the long term, competitive differentiation should come from analysis, not just knowledge. The star traders of tomorrow assume risk, achieve true client insight, and consistently beat the market (source IBM: www-935.ibm/services/us/imc/pdf/ge510-6270-trader.pdf).


Business resilience has been one of the main concerns of trading firms since September 11, 2001. Solutions in this area range from redundant data centers situated in different geographies and connected to multiple trading venues, to virtual trader solutions offering power traders most of the functionality of a trading floor at a remote location.


The financial services industry is one of the most demanding in terms of IT requirements. The industry is experiencing an architectural shift towards Services-Oriented Architecture (SOA), Web services, and virtualization of IT resources. SOA takes advantage of the increase in network speed to enable dynamic binding and virtualization of software components. This allows the creation of new applications without losing the investment in existing systems and infrastructure. The concept has the potential to revolutionize the way integration is done, enabling significant reductions in the complexity and cost of such integration (gigaspaces/download/MerrilLynchGigaSpacesWP.pdf).


Another trend is the consolidation of servers into data center server farms, while trader desks have only KVM extensions and ultra-thin clients (e.g., SunRay and HP blade solutions). High-speed Metro Area Networks enable market data to be multicast between different locations, enabling the virtualization of the trading floor.


High-Level Architecture.


Figure 1 depicts the high-level architecture of a trading environment. The ticker plant and the algorithmic trading engines are located in the high performance trading cluster in the firm's data center or at the exchange. The human traders are located in the end-user applications area.


Functionally, there are two application components in the enterprise trading environment: publishers and subscribers. The messaging bus provides the communication path between publishers and subscribers.


There are two types of traffic specific to a trading environment:


• Market data: Carries pricing information for financial instruments, news, and other value-added information such as analytics. It is unidirectional and very latency sensitive, typically delivered over UDP multicast. It is measured in updates/sec and in Mbps. Market data flows from one or multiple external feeds, coming from market data providers like stock exchanges, data aggregators, and ECNs. Each provider has its own market data format. The data is received by feed handlers, specialized applications that normalize and clean the data and then send it to data consumers such as pricing engines, algorithmic trading applications, or human traders. Sell-side firms also send the market data to their clients, buy-side firms such as mutual funds, hedge funds, and other asset managers. Some buy-side firms may opt to receive direct feeds from exchanges, reducing latency.


Figure 1 Trading Architecture for a Buy Side/Sell Side Firm.


There is no industry standard for market data formats. Each exchange has its proprietary format. Financial content providers such as Reuters and Bloomberg aggregate different sources of market data, normalize it, and add news or analytics. Examples of consolidated feeds are RDF (Reuters Data Feed), RWF (Reuters Wire Format), and Bloomberg Professional Services Data.
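

Conceptually, a feed handler's normalization step maps each proprietary format onto a single internal schema. The Python sketch below illustrates the idea with two invented vendor formats; all field names and layouts here are hypothetical, not any real vendor's protocol.

```python
from dataclasses import dataclass

@dataclass
class Tick:
    symbol: str
    price: float
    size: int
    venue: str

def normalize_vendor_a(raw: dict) -> Tick:
    # Hypothetical vendor A publishes {"sym": "CSCO", "px": "27.15", "qty": "300"}
    return Tick(symbol=raw["sym"], price=float(raw["px"]),
                size=int(raw["qty"]), venue="A")

def normalize_vendor_b(raw: str) -> Tick:
    # Hypothetical vendor B publishes a pipe-delimited string: "CSCO|27.16|500"
    sym, px, qty = raw.split("|")
    return Tick(symbol=sym, price=float(px), size=int(qty), venue="B")

# Downstream consumers (pricing engines, algo engines) see one schema:
ticks = [normalize_vendor_a({"sym": "CSCO", "px": "27.15", "qty": "300"}),
         normalize_vendor_b("CSCO|27.16|500")]
for t in ticks:
    print(t)
```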


To deliver low latency market data, both vendors have released real-time market data feeds which are less processed and have less analytics:


- Bloomberg B-Pipe: With B-Pipe, Bloomberg de-couples the market data feed from their distribution platform because a Bloomberg terminal is not required to get B-Pipe. Wombat and Reuters Feed Handlers have announced support for B-Pipe.


A firm can decide to receive feeds directly from an exchange to reduce latency. The gains in transmission speed can be between 150 milliseconds and 500 milliseconds. These feeds are more complex and more expensive, and the firm has to build and maintain its own ticker plant (financetech/featured/showArticle.jhtml?articleID=60404306).


• Trading orders: This type of traffic carries the actual trades. It is bidirectional and very latency sensitive. It is measured in messages/sec and Mbps. The orders originate from a buy side or sell side firm and are sent to trading venues like an exchange or ECN for execution. The most common format for order transport is FIX (Financial Information eXchange, fixprotocol/). The applications which handle FIX messages are called FIX engines, and they interface with order management systems (OMS).


An optimization of FIX is called FAST (FIX Adapted for Streaming), which uses a compression schema to reduce message length and, in effect, reduce latency. FAST is targeted more at the delivery of market data and has the potential to become a standard. FAST can also be used as a compression schema for proprietary market data formats.
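

To see why a compression schema shrinks messages, note that consecutive market data updates share most of their fields. The sketch below illustrates this delta principle in Python; it is a simplified illustration of the concept, not the actual FAST encoding.

```python
# Toy delta encoding: fields that rarely change are sent only when they
# differ from the previous message, shrinking the encoded size.

def encode_delta(prev: dict, cur: dict) -> dict:
    # Send only the fields that changed since the previous message.
    return {k: v for k, v in cur.items() if prev.get(k) != v}

def decode_delta(prev: dict, delta: dict) -> dict:
    out = dict(prev)
    out.update(delta)
    return out

prev = {"symbol": "CSCO", "price": 27.15, "size": 300, "venue": "NASDAQ"}
cur  = {"symbol": "CSCO", "price": 27.16, "size": 300, "venue": "NASDAQ"}

delta = encode_delta(prev, cur)       # only {"price": 27.16} goes on the wire
assert decode_delta(prev, delta) == cur
print(delta)
```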


To reduce latency, firms can opt to establish Direct Market Access (DMA).


DMA is the automated process of routing a securities order directly to an execution venue, therefore avoiding the intervention of a third party (towergroup/research/content/glossary.jsp?page=1&glossaryId=383). DMA requires a direct connection to the execution venue.


The messaging bus is middleware software from vendors such as Tibco, 29West, Reuters RMDS, or an open source platform such as AMQP. The messaging bus uses a reliable mechanism to deliver messages. The transport can be done over TCP/IP (TibcoEMS, 29West, RMDS, and AMQP) or UDP/multicast (TibcoRV, 29West, and RMDS). One important concept in message distribution is the "topic stream," which is a subset of market data defined by criteria such as ticker symbol, industry, or a certain basket of financial instruments. Subscribers join topic groups mapped to one or multiple sub-topics in order to receive only the relevant information. In the past, all traders received all market data. At the current volumes of traffic, this would be sub-optimal.


The network plays a critical role in the trading environment. Market data is carried to the trading floor, where the human traders are located, over a high-speed Campus or Metro Area network. High availability and low latency, as well as high throughput, are the most important metrics.


The high performance trading environment has most of its components in the data center server farm. To minimize latency, the algorithmic trading engines need to be located in the proximity of the feed handlers, FIX engines, and order management systems. An alternate deployment model has the algorithmic trading systems located at an exchange or at a service provider with fast connectivity to multiple exchanges.


Deployment Models.


There are two deployment models for a high performance trading platform. Firms can opt to have a mix of the two:


• Data center of the trading firm (Figure 2): This is the traditional model, where a full-fledged trading platform is developed and maintained by the firm, with communication links to all trading venues. Latency varies with the speed of the links and the number of hops between the firm and the venues.


Figure 2 Traditional Deployment Model.


• Co-location at the trading venue (exchanges, financial service providers (FSP)) (Figure 3)


The trading firm deploys its automated trading platform as close as possible to the execution venues to minimize latency.


Figure 3 Hosted Deployment Model.


Services-Oriented Trading Architecture.


We are proposing a services-oriented framework for building the next-generation trading architecture. This approach provides a conceptual framework and an implementation path based on modularization and minimization of inter-dependencies.


This framework provides firms with a methodology to:


• Evaluate their current state in terms of services.


• Prioritize services based on their value to the business.


• Evolve the trading platform to the desired state using a modular approach.


The high performance trading architecture relies on the following services, as defined by the services architecture framework represented in Figure 4.


Figure 4 Service Architecture Framework for High Performance Trading.


Table 1 Service Descriptions and Technologies.

• Ultra-low latency messaging: provided by the messaging bus middleware (see Ultra-Low Latency Messaging Service below).

• Latency monitoring: instrumented appliances, software agents, and router modules.

• Computing services: OS and I/O virtualization, Remote Direct Memory Access (RDMA), TCP Offload Engines (TOE).

• Application virtualization: middleware that parallelizes application processing.

• Data virtualization: middleware that accelerates data access for applications, for example in-memory caching.

• Multicast service: hardware-assisted multicast replication throughout the network; multicast Layer 2 and Layer 3 optimizations.

• Storage services: storage hardware virtualization (VSANs), data replication, remote backup, and file virtualization.

• Trading resilience and mobility: local and site load balancing and high availability campus networks.

• Wide area application services: application acceleration over a WAN connection for traders residing off campus.

• Thin client service: decoupling of the computing resources from the end-user-facing terminals.


Ultra-Low Latency Messaging Service.


This service is provided by the messaging bus, which is a software system that solves the problem of connecting many applications. The system consists of:


• A set of pre-defined message schemas.


• A set of common command messages.


• A shared application infrastructure for sending the messages to recipients. The shared infrastructure can be based on a message broker or on a publish/subscribe model.


The main requirements for the next-generation messaging bus are (source 29West):


• Lowest possible latency (e.g., less than 100 microseconds).


• Stability under heavy load (e.g., more than 1.4 million msg/sec).


• Control and flexibility (rate control and configurable transports).


There are efforts in the industry to standardize the messaging bus. The Advanced Message Queuing Protocol (AMQP) is an example of an open standard championed by J.P. Morgan Chase and supported by a group of vendors such as Cisco, Envoy Technologies, Red Hat, TWIST Process Innovations, Iona, 29West, and iMatix. Two of the main goals are to provide a simpler path to inter-operability for applications written on different platforms, and modularity so that the middleware can easily be evolved.


In very general terms, an AMQP server is analogous to an e-mail server, with each exchange acting as a message transfer agent and each message queue as a mailbox. The bindings define the routing tables in each transfer agent. Publishers send messages to individual transfer agents, which then route the messages into mailboxes. Consumers take messages from mailboxes, which creates a powerful and flexible model that is simple (source: amqp/tikiwiki/tiki-index.php?page=OpenApproach#Why_AMQP_).
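

The model maps directly onto code. Assuming a local RabbitMQ broker and the open source pika client, a minimal sketch of the exchange/binding/queue flow might look like this; the exchange name and routing key are invented:

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

# The exchange plays the "message transfer agent" role.
ch.exchange_declare(exchange="market.data", exchange_type="topic")

# The queue is the "mailbox"; the binding is its routing-table entry.
result = ch.queue_declare(queue="", exclusive=True)
queue_name = result.method.queue
ch.queue_bind(exchange="market.data", queue=queue_name,
              routing_key="equities.nasdaq.CSCO")

# Publisher side: send a quote to the transfer agent.
ch.basic_publish(exchange="market.data",
                 routing_key="equities.nasdaq.CSCO",
                 body=b"CSCO 27.16")

# Consumer side: take one message from the mailbox.
method, props, body = ch.basic_get(queue=queue_name, auto_ack=True)
print(body)
conn.close()
```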


Latency Monitoring Service.


The main requirements for this service are:


• Sub-millisecond granularity of measurements.


• Real-time visibility without adding latency to the trading traffic.


• Ability to differentiate application processing latency from network transit latency.


• Ability to handle high message rates.


• Provision of a programmatic interface for trading applications to receive latency data, enabling algorithmic trading engines to adapt to changing conditions.


• Correlation of network events with application events for troubleshooting purposes.


Latency can be defined as the time interval between when a trade order is sent and when the same order is acknowledged and acted upon by the receiving party.


Addressing the latency issue is a complex problem, requiring a holistic approach that identifies all sources of latency and applies different technologies at different layers of the system.


Figure 5 depicts the variety of components that can introduce latency at each layer of the OSI stack. It also maps each source of latency to a possible solution and to a monitoring solution. This layered approach can give firms a more structured way of attacking the latency issue, whereby each component can be treated as a service and handled consistently across the firm.


Maintaining an accurate measure of the dynamic state of this time interval across alternative routes and destinations can be of great assistance in tactical trading decisions. The ability to identify the exact location of delays, whether in the customer edge network, the central processing hub, or the transaction application level, significantly determines the ability of service providers to meet their trading service-level agreements (SLAs). For buy-side and sell-side firms, as well as for market data syndicators, the quick identification and removal of bottlenecks translates directly into enhanced trade opportunities and revenue.
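

As a minimal sketch of the measurement itself: given synchronized clocks at both ends (e.g., via GPS or PTP), one-way latency is simply the difference between the send and receive timestamps. The Python below simulates this; the timing values are illustrative.

```python
import time
import statistics

def one_way_latency(sent_ts: float, received_ts: float) -> float:
    # Valid only if sender and receiver clocks are synchronized.
    return received_ts - sent_ts

# Simulated samples: stamp at send, observe arrival at the receiver.
samples = []
for _ in range(5):
    t_sent = time.time()
    time.sleep(0.001)            # stand-in for network transit
    t_recv = time.time()
    samples.append(one_way_latency(t_sent, t_recv))

print(f"mean={statistics.mean(samples)*1e6:.0f}us "
      f"max={max(samples)*1e6:.0f}us")
```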


Figure 5 Latency Management Architecture.


Cisco Low-Latency Monitoring Tools.


Traditional network monitoring tools operate with minutes or seconds of granularity. Next-generation trading platforms, especially those supporting algorithmic trading, require latencies of less than 5 ms and extremely low levels of packet loss. On a Gigabit LAN, a 100 ms microburst can cause 10,000 transactions to be lost or excessively delayed.
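

Back-of-the-envelope arithmetic shows where a figure like 10,000 comes from; the message size below is an assumption chosen for illustration:

```python
# Illustrative arithmetic: 1 Gbps link, 100 ms burst, assumed message size.
link_bps = 1_000_000_000          # Gigabit LAN
burst_s = 0.100                   # 100 ms microburst
msg_bytes = 1250                  # assumed average message size

bytes_in_burst = link_bps / 8 * burst_s      # 12.5 MB arrives in the burst
msgs_at_risk = bytes_in_burst / msg_bytes    # messages queued or dropped
print(f"{bytes_in_burst/1e6:.1f} MB, ~{msgs_at_risk:,.0f} messages at risk")
# -> 12.5 MB, ~10,000 messages at risk
```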


Cisco offers its customers a choice of tools to measure latency in a trading environment:


• Bandwidth Quality Manager (BQM) (OEM from Corvil).


• Financial Services Latency Monitoring Solution (FSMS), based on Cisco AON.


Bandwidth Quality Manager.


Bandwidth Quality Manager (BQM) 4.0 is a next-generation network application performance management product that enables customers to monitor and provision their network for controlled levels of latency and loss performance. While BQM is not exclusively targeted at trading networks, its microsecond-level visibility combined with intelligent bandwidth provisioning features makes it ideal for these demanding environments.


Cisco BQM 4.0 implements a broad set of patented and patent-pending traffic measurement and network analysis technologies that give the user unprecedented visibility and understanding of how to optimize the network for maximum application performance.


Cisco BQM is now supported on the Cisco Application Deployment Engine (ADE) product family. The Cisco ADE product family is the platform of choice for Cisco network management applications.


BQM Benefits.


Cisco BQM micro-visibility is the ability to detect, measure, and analyze latency, jitter, and loss-inducing traffic events down to microscopic levels of granularity with per-packet resolution. This enables Cisco BQM to detect and determine the impact of traffic events on network latency, jitter, and loss. Critical for trading environments is that BQM can support latency, loss, and jitter measurements one-way for both TCP and UDP (multicast) traffic. This means it reports seamlessly on both trading traffic and market data feeds.


BQM allows the user to specify a comprehensive set of thresholds (against microburst activity, latency, loss, jitter, utilization, etc.) on all interfaces. BQM then operates a background rolling packet capture. Whenever a threshold violation or other potential performance degradation event occurs, it triggers Cisco BQM to store the packet capture to disk for later analysis. This allows the user to examine in full detail both the application traffic that was affected by performance degradation ("the victims") and the traffic that caused the performance degradation ("the culprits"). This can significantly reduce the time spent diagnosing and resolving network performance issues.


BQM is also capable of providing detailed bandwidth and quality of service (QoS) policy provisioning recommendations, which the user can apply directly to achieve the desired network performance.


BQM Measurements Illustrated.


To understand the difference between some of the more conventional measurement techniques and the visibility provided by BQM, we can look at some comparison graphs. In the first set of graphs (Figure 6 and Figure 7), we see the difference between the latency measured by BQM's Passive Network Quality Monitor (PNQM) and the latency measured by injecting ping packets every 1 second into the traffic stream.


In Figure 6, we see the latency reported by 1-second ICMP ping packets for real network traffic (it is divided by 2 to give an estimate of the one-way delay). It shows the delay comfortably below about 5 ms for almost all of the time.


Figure 6 Latency Reported by 1-Second ICMP Ping Packets for Real Network Traffic.


In Figure 7, we see the latency reported by PNQM for the same traffic at the same time. Here we see that by measuring the one-way latency of the actual application packets, we get a radically different picture. Here the latency is seen to be hovering around 20 ms, with occasional much larger bursts. The explanation is that because ping is sending packets only every second, it is missing most of the application traffic latency. In fact, ping results typically only indicate round-trip propagation delay rather than realistic application latency across the network.


Figure 7 Latency Reported by PNQM for Real Network Traffic.


In the second example (Figure 8), we see the difference in reported link load or saturation levels between a 5-minute average view and a 5 ms microburst view (BQM can report on microbursts down to 10-100 nanosecond accuracy). The green line shows the average utilization at 5-minute averages to be low, maybe up to 5 Mbits/s. The dark blue plot shows the 5 ms microburst activity reaching between 75 Mbits/s and 100 Mbits/s, effectively the LAN speed. BQM shows this level of granularity for all applications, and it also gives clear provisioning rules to enable the user to control or neutralize these microbursts.


Figure 8 Difference in Reported Link Load Between a 5-Minute Average View and a 5 ms Microburst View.
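

The averaging effect behind Figure 8 is easy to reproduce. The short simulation below, with invented traffic numbers, shows how a 5-minute average hides bursts that a 5 ms view exposes:

```python
import random

random.seed(42)
WINDOW_MS = 5
WINDOWS_PER_5MIN = 5 * 60 * 1000 // WINDOW_MS   # 60,000 windows

# Mostly idle traffic with rare 100 Mbit/s microbursts.
trace_mbps = [100.0 if random.random() < 0.03 else 1.0
              for _ in range(WINDOWS_PER_5MIN)]

avg_5min = sum(trace_mbps) / len(trace_mbps)
peak_5ms = max(trace_mbps)

print(f"5-minute average: {avg_5min:.1f} Mbit/s")   # looks harmless (~4 Mbit/s)
print(f"worst 5 ms window: {peak_5ms:.1f} Mbit/s")  # reveals the microburst
```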


BQM Deployment in the Trading Network.


Figure 9 shows a typical BQM deployment in a trading network.


Figure 9 Typical BQM Deployment in a Trading Network.


BQM can then be used to answer these types of questions:


• Are any of my Gigabit LAN core links saturated for more than X milliseconds? Is this causing loss? Which links would most benefit from an upgrade to Etherchannel or 10 Gigabit speeds?


• Is application traffic causing the saturation of my 1 Gigabit links?


• Is any of the market data experiencing end-to-end loss?


• How much additional latency does the failover data center experience? Is this link sized correctly to deal with microbursts?


• Are my traders getting low latency updates from the market data distribution layer? Are they seeing any delays greater than X milliseconds?


Being able to answer these questions simply and effectively saves time and money in running the trading network.


BQM is an essential tool for gaining visibility into market data and trading environments. It provides granular end-to-end latency measurements in complex infrastructures that experience high-volume data movement. Effectively detecting microbursts at sub-millisecond levels and receiving expert analysis on a particular event is invaluable to trading floor architects. Smart bandwidth provisioning recommendations, such as sizing and what-if analysis, provide greater agility to respond to volatile market conditions. As the explosion of algorithmic trading and increasing message rates continues, BQM, combined with its QoS tool, provides the capability of implementing QoS policies that can protect critical trading applications.


Cisco Financial Services Latency Monitoring Solution.


Cisco and Trading Metrics have collaborated on latency monitoring solutions for FIX order flow and market data monitoring. Cisco AON technology is the foundation for a new class of network-embedded products and solutions that help merge intelligent networks with application infrastructure, based on either service-oriented or traditional architectures. Trading Metrics is a leading provider of analytics software for network infrastructure and application latency monitoring purposes (trademetrics/).


The Cisco AON Financial Services Latency Monitoring Solution (FSMS) correlates two kinds of events at the point of observation:


• Network events correlated directly with coincident application message handling.


• Trade order flow and matching market update events.


Using time stamps asserted at the point of capture in the network, real-time analysis of these correlated data streams permits precise identification of bottlenecks across the infrastructure while a trade is being executed or market data is being distributed. By monitoring and measuring latency early in the cycle, financial firms can make better decisions about which network service, and which intermediary, market, or counterparty, to select for routing trade orders. Likewise, this knowledge allows more streamlined access to updated market data (stock quotes, economic news, etc.), which is an important basis for initiating, withdrawing from, or pursuing market opportunities.


The components of the solution are:


• AON hardware in three form factors:


- AON Network Module for Cisco 2600/2800/3700/3800 routers.


- AON Blade for the Cisco Catalyst 6500 series.


- AON 8340 Appliance.


• Trading Metrics M&A 2.0 software, which provides the monitoring and alerting application, displays latency graphs on a dashboard, and issues alerts when slowdowns occur (trademetrics/TM_brochure.pdf).


Figure 10 AON-Based FIX Latency Monitoring.


Cisco IP SLA.


Cisco IP SLA is an embedded network management tool in Cisco IOS which allows routers and switches to generate synthetic traffic streams which can be measured for latency, jitter, packet loss, and other criteria (cisco/go/ipsla).


Two key concepts are the source of the generated traffic and the target. Both of these run an IP SLA "responder," which has the responsibility of timestamping the control traffic before it is sourced and returned by the target (for a round-trip measurement). Various traffic types can be sourced within IP SLA, aimed at different metrics and targeting different services and applications. The UDP jitter operation is used to measure round-trip delay and report variations. As the traffic is time-stamped on both sending and target devices using the responder capability, the round-trip delay is characterized as the delta between the two time stamps.
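

As a toy illustration of the delta computation described above (not the IP SLA implementation itself), round-trip delay and jitter can be derived from matched send and receive timestamps:

```python
# Timestamps are illustrative values in seconds on the source clock.
send_ts = [0.000, 0.020, 0.040, 0.060]
recv_ts = [0.0004, 0.0207, 0.0405, 0.0612]   # echoed back by the responder

rtts = [r - s for s, r in zip(send_ts, recv_ts)]
jitter = [abs(rtts[i] - rtts[i - 1]) for i in range(1, len(rtts))]

print("RTT (us):   ", [round(x * 1e6) for x in rtts])
print("jitter (us):", [round(x * 1e6) for x in jitter])
```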


A new feature was introduced in IOS 12.3(14)T, IP SLA Sub Millisecond Reporting, which allows time stamps to be displayed with a resolution in microseconds, thus providing a level of granularity not previously available. This new feature has made IP SLA relevant to campus networks, where network latency is typically in the range of 300-800 microseconds and the ability to detect trends and spikes (brief trends) based on microsecond-granularity counters is a requirement for customers engaged in time-sensitive electronic trading environments.


As a result, IP SLA is now being considered by a significant number of financial organizations, as they all face requirements to:


• Report baseline latency to their users.


• Trend baseline latency over time.


• Respond quickly to traffic bursts that cause changes in the reported latency.


Sub-millisecond reporting is necessary for these customers, since many campuses and backbones are currently delivering sub-second latency across several switch hops. Electronic trading environments have generally worked to eliminate or minimize all areas of device and network latency to deliver rapid order fulfillment to the business. Reporting that network response times are "just under one millisecond" is no longer sufficient; the granularity of latency measurements reported across a network segment or backbone needs to be closer to 300-800 microseconds, with a degree of resolution of 100 microseconds.


IP SLA recently added support for IP multicast test streams, which can measure market data latency.


A typical network topology is shown in Figure 11 with the IP SLA shadow routers, sources, and responders.


Figure 11 IP SLA Deployment.


Computing Services.


Computing services cover a wide range of technologies with the goal of eliminating the memory and CPU bottlenecks created by the processing of network packets. Trading applications consume high volumes of market data, and the servers have to dedicate resources to processing network traffic instead of application processing.


• Transport processing: At high speeds, network packet processing can consume a significant amount of server CPU cycles and memory. An established rule of thumb states that 1 Gbps of network bandwidth requires 1 GHz of processor capacity (source: Intel white paper on I/O acceleration, intel/technology/ioacceleration/306517.pdf).


• Intermediate buffer copying: In a conventional network stack implementation, data needs to be copied by the CPU between network buffers and application buffers. This overhead is worsened by the fact that memory speeds have not kept up with the increases in CPU speeds. For example, processors like the Intel Xeon are approaching 4 GHz, while RAM chips hover around 400 MHz (for DDR 3200 memory) (source: Intel, intel/technology/ioacceleration/306517.pdf).


• Context switching: Every time an individual packet needs to be processed, the CPU performs a context switch from application context to network traffic context. This overhead could be reduced if the switch occurred only when the whole application buffer is complete.


Figure 12 Sources of Overhead in Data Center Servers.


• TCP Offload Engine (TOE): Offloads transport processor cycles to the NIC. Moves TCP/IP protocol stack buffer copies from system memory to NIC memory.


• Remote Direct Memory Access (RDMA): Enables a network adapter to transfer data directly from application to application without involving the operating system. Eliminates intermediate and application buffer copies (memory bandwidth consumption).


• Kernel bypass: Direct user-level access to hardware. Dramatically reduces application context switches.


Figure 13 RDMA and Kernel Bypass.


InfiniBand is a point-to-point (switched fabric) bidirectional serial communication link which implements RDMA, among other features. Cisco offers an InfiniBand switch, the Server Fabric Switch (SFS): cisco/application/pdf/en/us/guest/netsol/ns500/c643/cdccont_0900aecd804c35cb.pdf.


Figure 14 Typical SFS Deployment.


Trading applications benefit from the reduction of latency and latency variability, as proved by a test performed with the Cisco SFS and Wombat feed handlers by Stac Research:


Application Virtualization Service.


De-coupling the application from the underlying OS and server hardware enables applications to run as network services. One application can be run in parallel on multiple servers, or multiple applications can be run on the same server, as the best resource allocation dictates. This de-coupling enables better load balancing and disaster recovery for business continuance strategies. The process of re-allocating computing resources to an application is dynamic. Using an application virtualization system like Data Synapse's GridServer, applications can migrate, using pre-configured policies, to under-utilized servers in a supply-matches-demand process (networkworld/supp/2005/ndc1/022105virtual.html?page=2).


There are many business advantages for financial firms that adopt application virtualization:


• Faster time to market for new products and services.


• Faster integration of firms after merger and acquisition activity.


• Increased application availability.


• Better workload distribution, which creates more "head room" for processing spikes in trading volume.


• Operational efficiency and control.


• Reduction in IT complexity.


Currently, application virtualization is not used in the trading front-office. One use case is risk modeling, like Monte Carlo simulations. As the technology evolves, it is conceivable that some of the trading platforms will adopt it.


Data Virtualization Service.


To effectively share resources across distributed enterprise applications, firms must be able to leverage data across multiple sources in real-time while ensuring data integrity. With solutions from data virtualization software vendors such as Gemstone or Tangosol (now Oracle), financial firms can access heterogeneous sources of data as a single system image that enables connectivity between business processes and unrestrained application access to distributed caching. The net result is that all users have instant access to these data resources across a distributed network (gridtoday/03/0210/101061.html).


This is called a data grid and is the first step in the process of creating what Gartner calls Extreme Transaction Processing (XTP) (gartner/DisplayDocument?ref=g_search&id=500947). Technologies such as data and applications virtualization enable financial firms to perform real-time complex analytics, event-driven applications, and dynamic resource allocation.


One example of data virtualization in action is a global order book application. An order book is the repository of active orders that is published by the exchange or other market makers. A global order book aggregates orders from around the world from markets that operate independently. The biggest challenge for the application is scalability over WAN connectivity because it has to maintain state. Today's data grids are localized in data centers connected by Metro Area Networks (MAN). This is mainly because the applications themselves have limits—they have been developed without the WAN in mind.


Figure 15 GemStone GemFire Distributed Caching.


Before data virtualization, applications used database clustering for failover and scalability. This solution is limited by the performance of the underlying database. Failover is slower because the data is committed to disc. With data grids, the data which is part of the active state is cached in memory, which reduces drastically the failover time. Scaling the data grid means just adding more distributed resources, providing a more deterministic performance compared to a database cluster.


Multicast Service.


Market data delivery is a perfect example of an application that needs to deliver the same data stream to hundreds and potentially thousands of end users. Market data services have been implemented with TCP or UDP broadcast as the network layer, but those implementations have limited scalability. Using TCP requires a separate socket and sliding window on the server for each recipient. UDP broadcast requires a separate copy of the stream for each destination subnet. Both of these methods exhaust the resources of the servers and the network. The server side must transmit and service each of the streams individually, which requires larger and larger server farms. On the network side, the required bandwidth for the application increases in a linear fashion. For example, to send a 1 Mbps stream to 1000 recipients using TCP requires 1 Gbps of bandwidth.


IP multicast is the only way to scale market data delivery. To deliver a 1 Mbps stream to 1000 recipients, IP multicast would require 1 Mbps. The stream can be delivered by as few as two servers—one primary and one backup for redundancy.


There are two main phases of market data delivery to the end user. In the first phase, the data stream must be brought from the exchange into the brokerage's network. Typically the feeds are terminated in a data center on the customer premises. The feeds are then processed by a feed handler, which may normalize the data stream into a common format and then republish into the application messaging servers in the data center.


The second phase involves injecting the data stream into the application messaging bus which feeds the core infrastructure of the trading applications. The large brokerage houses have thousands of applications that use the market data streams for various purposes, such as live trades, long term trending, arbitrage, etc. Many of these applications listen to the feeds and then republish their own analytical and derivative information. For example, a brokerage may compare the prices of CSCO to the option prices of CSCO on another exchange and then publish ratings which a different application may monitor to determine how much they are out of synchronization.


Figure 16 Market Data Distribution Players.


The delivery of these data streams is typically over a reliable multicast transport protocol, traditionally Tibco Rendezvous. Tibco RV operates in a publish and subscribe environment. Each financial instrument is given a subject name, such as CSCO.last. Each application server can request the individual instruments of interest by their subject name and receive just that subset of the information. This is called subject-based forwarding or filtering. Subject-based filtering is patented by Tibco.
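

As a rough illustration of subject-based filtering (a toy matcher, not Tibco's patented implementation), a subscription pattern can be matched against subject names like CSCO.last:

```python
def matches(subscription: str, subject: str) -> bool:
    """'*' matches exactly one subject element, '>' matches the rest."""
    sub, subj = subscription.split("."), subject.split(".")
    for i, token in enumerate(sub):
        if token == ">":
            return True
        if i >= len(subj) or (token != "*" and token != subj[i]):
            return False
    return len(sub) == len(subj)

assert matches("CSCO.last", "CSCO.last")
assert matches("CSCO.*", "CSCO.last")
assert matches("*.last", "CSCO.last")
assert not matches("INTC.*", "CSCO.last")
```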


A distinction should be made between the first and second phases of market data delivery. The delivery of market data from the exchange to the brokerage is mostly a one-to-many application. The only exception to the unidirectional nature of market data may be retransmission requests, which are usually sent using unicast. The trading applications, however, are definitely many-to-many applications and may interact with the exchanges to place orders.


Figure 17 Market Data Architecture.


Design Issues.


Number of Groups/Channels to Use.


Many application developers consider using thousands of multicast groups to give them the ability to divide up products or instruments into small buckets. Normally these applications send many small messages as part of their information bus. Usually several messages are sent in each packet that is received by many users. Sending fewer messages in each packet increases the overhead necessary for each message.


In the extreme case, sending only one message in each packet quickly reaches the point of diminishing returns—there is more overhead sent than actual data. Application developers must find a reasonable compromise between the number of groups and breaking up their products into logical buckets.
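

The trade-off is easy to quantify. Using typical Ethernet/IPv4/UDP header sizes and an assumed message size, the per-message overhead grows sharply as fewer messages are packed per packet:

```python
HEADERS = 14 + 20 + 8        # Ethernet + IPv4 + UDP headers, bytes
MSG = 60                     # assumed size of one market data message, bytes

for msgs_per_packet in (1, 5, 20):
    payload = MSG * msgs_per_packet
    overhead_per_msg = HEADERS / msgs_per_packet
    efficiency = payload / (payload + HEADERS)
    print(f"{msgs_per_packet:>2} msgs/packet: "
          f"{overhead_per_msg:.1f} B overhead/msg, "
          f"{efficiency:.0%} efficient")
```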


Consider, for example, the Nasdaq Quotation Dissemination Service (NQDS). The instruments are broken up alphabetically:


Another example is the Nasdaq Totalview service, broken up this way:


This approach allows for straightforward network/application management, but does not necessarily allow for optimized bandwidth utilization for most users. A user of NQDS that is interested in technology stocks, and would like to subscribe to just CSCO and INTL, would have to pull down all the data for the first two groups of NQDS. Understanding the way users pull down the data and then organize it into appropriate logical groups optimizes the bandwidth for each user.


In many market data applications, optimizing the data organization would be of limited value. Typically customers bring in all data into a few machines and filter the instruments. Using more groups is just more overhead for the stack and does not help the customers conserve bandwidth. Another approach might be to keep the groups down to a minimum level and use UDP port numbers to further differentiate if necessary. The other extreme would be to use just one multicast group for the entire application and then have the end user filter the data. In some situations this may be sufficient.


Intermittent Sources.


A common issue with market data applications is servers that send data to a multicast group and then go silent for more than 3.5 minutes. These intermittent sources may cause thrashing of state on the network and can introduce packet loss during the window of time when soft state and then hardware shortcuts are being created.


PIM-Bidir or PIM-SSM.


The first and best solution for intermittent sources is to use PIM-Bidir for many-to-many applications and PIM-SSM for one-to-many applications.


Both of these optimizations of the PIM protocol do not have any data-driven events in creating forwarding state. That means that as long as the receivers are subscribed to the streams, the network has the forwarding state created in the hardware switching path.


Intermittent sources are not an issue with PIM-Bidir and PIM-SSM.


Null Packets.


In PIM-SM environments a common method to make sure forwarding state is created is to send a burst of null packets to the multicast group before the actual data stream. The application must efficiently ignore these null data packets to ensure it does not affect performance. The sources must only send the burst of packets if they have been silent for more than 3 minutes. A good practice is to send the burst if the source is silent for more than a minute. Many financials send out an initial burst of traffic in the morning and then all well-behaved sources do not have problems.


Periodic Keepalives or Heartbeats.


An alternative approach for PIM-SM environments is for sources to send periodic heartbeat messages to the multicast groups. This is a similar approach to the null packets, but the packets can be sent on a regular timer so that the forwarding state never expires.
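

A minimal heartbeat sender along these lines might look like the following Python sketch, assuming a Unix-like host; the group address, port, and interval are hypothetical and would be tuned to the state expiry timers in use:

```python
import socket
import time

GROUP, PORT = "239.1.1.1", 5000    # example administratively-scoped group
INTERVAL_S = 30                    # keep well under the state expiry timers

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)

while True:
    # Receivers must cheaply recognize and discard these messages.
    sock.sendto(b"HEARTBEAT", (GROUP, PORT))
    time.sleep(INTERVAL_S)
```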


(S,G) Expiry Timer.


Finally, Cisco has made a modification to the operation of the (S,G) expiry timer in IOS. There is now a CLI knob to allow the state for an (S,G) to stay alive for hours without any traffic being sent. The (S,G) expiry timer is configurable. This approach should be considered a workaround until PIM-Bidir or PIM-SSM is deployed or the application is fixed.


RTCP Feedback.


A common issue with real time voice and video applications that use RTP is the use of RTCP feedback traffic. Unnecessary use of the feedback option can create excessive multicast state in the network. If the RTCP traffic is not required by the application it should be avoided.


Fast Producers and Slow Consumers.


Today many servers providing market data are attached at Gigabit speeds, while the receivers are attached at different speeds, usually 100Mbps. This creates the potential for receivers to drop packets and request re-transmissions, which creates more traffic that the slowest consumers cannot handle, continuing the vicious circle.


The solution needs to be some type of access control in the application that limits the amount of data that one host can request. QoS and other network functions can mitigate the problem, but ultimately the subscriptions need to be managed in the application.


Tibco Heartbeats.


TibcoRV has had the ability to use IP multicast for the heartbeat between the TICs for many years. However, there are some brokerage houses that are still using very old versions of TibcoRV that use UDP broadcast support for the resiliency. This limitation is often cited as a reason to maintain a Layer 2 infrastructure between TICs located in different data centers. These older versions of TibcoRV should be phased out in favor of the IP multicast supported versions.


Multicast Forwarding Options.


PIM Sparse Mode.


The standard IP multicast forwarding protocol used today for market data delivery is PIM Sparse Mode. It is supported on all Cisco routers and switches and is well understood. PIM-SM can be used in all the network components from the exchange, FSP, and brokerage.


There are, however, some long-standing issues and unnecessary complexity associated with a PIM-SM deployment that could be avoided by using PIM-Bidir and PIM-SSM. These are covered in the next sections.


The main components of the PIM-SM implementation are:


• PIM Sparse Mode v2.


• Shared Tree (spt-threshold infinity)


• Anycast RP, a design option in the brokerage or in the exchange.


Details of Anycast RP can be found in:


The classic high availability design for Tibco in the brokerage network is documented in:


Bidirectional PIM.


PIM-Bidir is an optimization of PIM Sparse Mode for many-to-many applications. It has several key advantages over a PIM-SM deployment:


• Better support for intermittent sources.


• No data-triggered events.


One of the weaknesses of PIM-SM is that the network continually needs to react to active data flows. This can cause non-deterministic behavior that may be hard to troubleshoot. PIM-Bidir has the following major protocol differences over PIM-SM:


– No source registration.


Source traffic is automatically sent to the RP and then down to the interested receivers. There is no unicast encapsulation, PIM joins from the RP to the first hop router and then registration stop messages.


All PIM-Bidir traffic is forwarded on a *,G forwarding entry. The router does not have to monitor the traffic flow on a *,G and then send joins when the traffic passes a threshold.


– No need for an actual RP.


The RP does not have an actual protocol function in PIM-Bidir. The RP acts as a routing vector in which all the traffic converges. The RP can be configured as an address that is not assigned to any particular device. This is called a Phantom RP.


– No need for MSDP.


MSDP provides source information between RPs in a PIM-SM network. PIM-Bidir does not use the active source information for any forwarding decisions and therefore MSDP is not required.


Bidirectional PIM is ideally suited for the brokerage network in the data center of the exchange. In this environment there are many sources sending to a relatively small set of groups in a many-to-many traffic pattern.


The key components of the PIM-Bidir implementation are:


Further details about Phantom RP and basic PIM-Bidir design are documented in:


Source Specific Multicast.


PIM-SSM is an optimization of PIM Sparse Mode for one-to-many applications. In certain environments it can offer several distinct advantages over PIM-SM. Like PIM-Bidir, PIM-SSM does not rely on any data-triggered events. Furthermore, PIM-SSM does not require an RP at all—there is no such concept in PIM-SSM. The forwarding information in the network is completely controlled by the interest of the receivers.


Source Specific Multicast is ideally suited for market data delivery in the financial service provider. The FSP can receive the feeds from the exchanges and then route them to the edge of their network.


Many FSPs are also implementing MPLS and Multicast VPNs in their core. PIM-SSM is the preferred method for transporting traffic in VRFs.


When PIM-SSM is deployed all the way to the end user, the receiver indicates its interest in a particular (S,G) with IGMPv3. Even though IGMPv3 was defined by RFC 3376 back in October 2002, it still has not been implemented by all edge devices. This creates a challenge for deploying an end-to-end PIM-SSM service. A transitional solution has been developed by Cisco to enable an edge device that supports IGMPv2 to participate in a PIM-SSM service. This feature is called SSM Mapping and is documented in:
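

On the receiver side, a source-specific join can be expressed through the sockets API. The sketch below assumes a Linux host; the (S,G) addresses are examples, and since the Python socket module does not expose IP_ADD_SOURCE_MEMBERSHIP on every platform, it falls back to the Linux numeric value:

```python
import socket
import struct

GROUP, SOURCE, PORT = "232.1.1.1", "192.0.2.10", 5000   # example (S,G)

IP_ADD_SOURCE_MEMBERSHIP = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# struct ip_mreq_source: multicast group, local interface, source address.
mreq = struct.pack("4s4s4s",
                   socket.inet_aton(GROUP),
                   socket.inet_aton("0.0.0.0"),
                   socket.inet_aton(SOURCE))
sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(2048)   # receive only traffic sent by SOURCE
```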


Storage Services.


The service provides storage capabilities to the market data and trading environments. Trading applications access backend storage to connect to different databases and other repositories consisting of portfolios, trade settlements, compliance data, management applications, Enterprise Service Bus (ESB), and other critical applications where reliability and security are critical to the success of the business. The main requirements for the service are:


Storage virtualization is an enabling technology that simplifies management of complex infrastructures, enables non-disruptive operations, and facilitates critical elements of a proactive information lifecycle management (ILM) strategy. EMC Invista running on the Cisco MDS 9000 enables heterogeneous storage pooling and dynamic storage provisioning, allowing allocation of any storage to any application. High availability is increased with seamless data migration. Appropriate class of storage is allocated to point-in-time copies (clones). Storage virtualization is also leveraged through the use of Virtual Storage Area Networks (VSANs), which enable the consolidation of multiple isolated SANs onto a single physical SAN infrastructure, while still partitioning them as completely separate logical entities. VSANs provide all the security and fabric services of traditional SANs, yet give organizations the flexibility to easily move resources from one VSAN to another. This results in increased disk and network utilization while driving down the cost of management. Integrated Inter VSAN Routing (IVR) enables sharing of common resources across VSANs.


Figure 18 High Performance Computing Storage.


Replication of data to a secondary and tertiary data center is crucial for business continuance. Replication offsite over Fibre Channel over IP (FCIP) coupled with write acceleration and tape acceleration provides improved performance over long distance. Continuous Data Replication (CDP) is another mechanism which is gaining popularity in the industry. It refers to backup of computer data by automatically saving a copy of every change made to that data, essentially capturing every version of the data that the user saves. It allows the user or administrator to restore data to any point in time. Solutions from EMC and Incipient utilize the SANTap protocol on the Storage Services Module (SSM) in the MDS platform to provide CDP functionality. The SSM uses the SANTap service to intercept and redirect a copy of a write between a given initiator and target. The appliance does not reside in the data path—it is completely passive. The CDP solutions typically leverage a history journal that tracks all changes and bookmarks that identify application-specific events. This ensures that data at any point in time is fully self-consistent and is recoverable instantly in the event of a site failure.


Backup procedure reliability and performance are extremely important when storing critical financial data to a SAN. The use of expensive media servers to move data from disk to tape devices can be cumbersome. Network-accelerated serverless backup (NASB) helps you back up increased amounts of data in shorter backup time frames by shifting the data movement from multiple backup servers to Cisco MDS 9000 Series multilayer switches. This technology decreases impact on application servers because the MDS offloads the application and backup servers. It also reduces the number of backup and media servers required, thus reducing CAPEX and OPEX. The flexibility of the backup environment increases because storage and tape drives can reside anywhere on the SAN.


Trading Resilience and Mobility.


The main requirements for this service are to provide the virtual trader with:


• Fully scalable and redundant campus trading environment.


• Resilient server load balancing and high availability in analytic server farms.


• Global site load balancing that provides the capability to continue participating in the market venues of closest proximity.


A highly-available campus environment is capable of sustaining multiple failures (i.e., links, switches, modules, etc.), which provides non-disruptive access to trading systems for traders and market data feeds. Fine-tuned routing protocol timers, in conjunction with mechanisms such as NSF/SSO, provide subsecond recovery from any failure.


The high-speed interconnect between data centers can be DWDM/dark fiber, which provides business continuance in case of a site failure. Each site is 100km-200km apart, allowing synchronous data replication. Usually the distance for synchronous data replication is 100km, but with Read/Write Acceleration it can stretch to 200km. A tertiary data center can be greater than 200km away, which would replicate data in an asynchronous fashion.
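

The distance limits follow from simple propagation arithmetic (light in fiber covers roughly 5 microseconds per kilometer); synchronous replication waits for the remote acknowledgment on every write, so the round trip bounds write latency:

```python
US_PER_KM = 5.0   # approximate one-way propagation delay in fiber

for km in (100, 200):
    one_way_us = km * US_PER_KM
    rtt_ms = 2 * one_way_us / 1000
    print(f"{km} km: one-way {one_way_us:.0f} us, round trip {rtt_ms:.1f} ms")
# -> 100 km adds ~1 ms and 200 km ~2 ms to every synchronous write.
```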


Figure 19 Trading Resilience.


A robust server load balancing solution is required for order routing, algorithmic trading, risk analysis, and other services to offer continuous access to clients regardless of a server failure. Multiple servers encompass a "farm" and these hosts can be added/removed without disruption since they reside behind a virtual IP (VIP) address which is announced in the network.


A global site load balancing solution provides remote traders the resiliency to access trading environments which are closer to their location. This minimizes latency for execution times since requests are always routed to the nearest venue.


Figure 20 Virtualization of Trading Environment.


A trading environment can be virtualized to provide segmentation and resiliency in complex architectures. Figure 20 illustrates a high-level topology depicting multiple market data feeds entering the environment, whereby each vendor is assigned its own Virtual Routing and Forwarding (VRF) instance. The market data is transferred to a high-speed InfiniBand low-latency compute fabric where feed handlers, order routing systems, and algorithmic trading systems reside. All storage is accessed via a SAN and is also virtualized with VSANs, allowing further security and segmentation. The normalized data from the compute fabric is transferred to the campus trading environment where the trading desks reside.


Wide Area Application Services.


This service provides application acceleration and optimization capabilities for traders who are located outside of the core trading floor facility/data center and work from a remote office. To consolidate servers in remote offices, file servers, NAS filers, storage arrays, and tape drives are moved to a corporate data center, which increases security and regulatory compliance and facilitates centralized storage and archival management. As the traditional trading floor becomes more virtual, wide area application services technology is being used to provide a "LAN-like" experience to remote traders when they access resources at the corporate site. Traders often use Microsoft Office applications - especially Excel - in addition to SharePoint and Exchange. Excel is used heavily for modeling and permutations, where often only small portions of a file change. The CIFS protocol is notoriously "chatty": several messages normally traverse the WAN for a simple file operation, and this is exactly what Wide Area Application Services (WAAS) technology addresses. Bloomberg and Reuters applications are also very popular financial tools that access a centralized SAN or NAS filer to retrieve critical data, which is fused together before being presented on a trader's screen.


Figure 21 Wide Area Optimization.


A pair of Wide Area Application Engines (WAEs) that reside in the remote office and the data center provide local object caching to increase application performance. The remote office WAEs can be a module in the ISR router or a stand-alone appliance. The data center WAE devices are load balanced behind an Application Control Engine module installed in a pair of Catalyst 6500 series switches at the aggregation layer. The WAE appliance farm is represented by a virtual IP address. The local router in each site utilizes Web Cache Communication Protocol version 2 (WCCP v2) to redirect traffic to the WAE that intercepts the traffic and determines if there is a cache hit or miss. The content is served locally from the engine if it resides in cache; otherwise the request is sent across the WAN the initial time to retrieve the object. This methodology optimizes the trader experience by removing application latency and shielding the individual from any congestion in the WAN.


WAAS uses the following technologies to provide application acceleration:


• Data Redundancy Elimination (DRE) is an advanced form of network compression which allows the WAE to maintain a history of previously-seen TCP message traffic for the purposes of reducing redundancy found in network traffic. This combined with the Lempel-Ziv (LZ) compression algorithm reduces the number of redundant packets that traverse the WAN, which improves application transaction performance and conserves bandwidth.


• Transport Flow Optimization (TFO) employs a robust TCP proxy to safely optimize TCP at the WAE device by applying TCP-compliant optimizations to shield the clients and servers from poor TCP behavior because of WAN conditions. By running a TCP proxy between the devices and leveraging an optimized TCP stack between the devices, many of the problems that occur in the WAN are completely blocked from propagating back to trader desktops. The traders experience LAN-like TCP response times and behavior because the WAE is terminating TCP locally. TFO improves reliability and throughput through increases in TCP window scaling and sizing enhancements in addition to superior congestion management.


Thin Client Service.


This service provides a "thin" advanced trading desktop which delivers significant advantages to demanding trading floor environments requiring continuous growth in compute power. As financial institutions race to provide the best trade executions for their clients, traders are utilizing several simultaneous critical applications that facilitate complex transactions. It is not uncommon to find three or more workstations and monitors at a trader's desk which provide visibility into market liquidity, trading venues, news, analysis of complex portfolio simulations, and other financial tools. In addition, market dynamics continue to evolve with Direct Market Access (DMA), ECNs, alternative trading venues, and upcoming regulation changes with Regulation National Market System (RegNMS) in the US and Markets in Financial Instruments Directive (MiFID) in Europe. At the same time, the business seeks greater control, improved ROI, and additional flexibility, which creates greater demands on trading floor infrastructures.


Traders no longer require multiple workstations at their desk. Thin clients consist of a keyboard, mouse, and multiple displays, and provide a total trader desktop solution without compromising security. Hewlett Packard, Citrix, Desktone, Wyse, and other vendors provide thin client solutions to capitalize on the virtual desktop paradigm. Thin clients de-couple the user-facing hardware from the processing hardware, thus enabling IT to grow the processing power without changing anything on the end user side. The workstation computing power resides in the data center on blade workstations, which provide greater scalability, increased data security, improved business continuance across multiple sites, and reduction in OPEX by removing the need to manage individual workstations on the trading floor. One blade workstation can be dedicated to a trader or shared among multiple traders depending on the requirements for compute power.


The "thin client" solution is optimized to work in a campus LAN environment, but can also extend the benefits to traders in remote locations. Latency is always a concern when there is a WAN interconnecting the blade workstation and thin client devices. The network connection needs to be sized accordingly so traffic is not dropped if saturation points exist in the WAN topology. WAN Quality of Service (QoS) should prioritize sensitive traffic. There are some guidelines which should be followed to allow for an optimized user experience. A typical highly-interactive desktop experience requires a client-to-blade round trip latency of <20ms for a 2Kb packet size. There may be a slight lag in display if network latency is between 20ms to 40ms. A typical trader desk with a four multi-display terminal requires 2-3Mbps bandwidth consumption with seamless communication with blade workstation(s) in the data center. Streaming video (800x600 at 24fps/full color) requires 9 Mbps bandwidth usage.


Figure 22 Thin Client Architecture.


Management of a large thin client environment is simplified since a centralized IT staff manages all of the blade workstations dispersed across multiple data centers. A trader is redirected to the most available environment in the enterprise in the event of a particular site failure. High availability is a key concern in critical financial environments and the Blade Workstation design provides rapid provisioning of another blade workstation in the data center. This resiliency provides greater uptime, increases in productivity, and OpEx reduction.


Advanced Encryption Standard.


Advanced Message Queueing Protocol.


Application Oriented Networking.


The Archipelago® Integrated Web book gives investors the unique opportunity to view the entire ArcaEx and ArcaEdge books in addition to books made available by other market participants.


ECN Order Book feed available via NASDAQ.


Chicago Board of Trade.


Class-Based Weighted Fair Queueing.


Continuous Data Replication.


Chicago Mercantile Exchange is engaged in trading of futures contracts and derivatives.


Central Processing Unit.


Distributed Defect Tracking System.


Direct Market Access.


Data Redundancy Elimination.


Dense Wavelength Division Multiplexing.


Electronic Communication Network.


Enterprise Service Bus.


Enterprise Solutions Engineering.


FIX Adapted for Streaming.


Fibre Channel over IP.


Financial Information Exchange.


Financial Services Latency Monitoring Solution.


Financial Service Provider.


Information Lifecycle Management.


Instinet Island Book.


Internetworking Operating System.


Keyboard Video Mouse.


Low Latency Queueing.


Metro Area Network.


Multilayer Director Switch.


Markets in Financial Instruments Directive.


Message Passing Interface is an industry standard specifying a library of functions to enable the passing of messages between nodes within a parallel computing environment.


Network Attached Storage.


Network Accelerated Serverless Backup.


Network Interface Card.


Nasdaq Quotation Dissemination Service.


Order Management System.


Open Systems Interconnection.


Protocol Independent Multicast.


PIM-Source Specific Multicast.


Quality of Service.


Random Access Memory.


Reuters Data Feed.


Reuters Data Feed Direct.


Remote Direct Memory Access.


Regulation National Market System.


Remote Graphics Software.


Reuters Market Data System.


RTP Control Protocol.


Real Time Protocol.


Reuters Wire Format.


Storage Area Network.


Small Computer System Interface.


Sockets Direct Protocol—Given that many modern applications are written using the sockets API, SDP can intercept the sockets at the kernel level and map these socket calls to an InfiniBand transport service that uses RDMA operations to offload data movement from the CPU to the HCA hardware.


Server Fabric Switch.


Secure Financial Transaction Infrastructure network developed to provide firms with excellent communication paths to NYSE Group, AMEX, Chicago Stock Exchange, NASDAQ, and other exchanges. It is often used for order routing.


The LMAX Architecture.


LMAX is a new retail financial trading platform. As a result it has to process many trades with low latency. The system is built on the JVM platform and centers on a Business Logic Processor that can handle 6 million orders per second on a single thread. The Business Logic Processor runs entirely in-memory using event sourcing. The Business Logic Processor is surrounded by Disruptors - a concurrency component that implements a network of queues that operate without needing locks. During the design process the team concluded that recent directions in high-performance concurrency models using queues are fundamentally at odds with modern CPU design.


Over the last few years we keep hearing that "the free lunch is over"[1] - we can't expect increases in individual CPU speed. So to write fast code we need to explicitly use multiple processors with concurrent software. This is not good news - writing concurrent code is very hard. Locks and semaphores are hard to reason about and hard to test - meaning we are spending more time worrying about satisfying the computer than we are solving the domain problem. Various concurrency models, such as Actors and Software Transactional Memory, aim to make this easier - but there is still a burden that introduces bugs and complexity.


So I was fascinated to hear about a talk at QCon London in March last year from LMAX. LMAX is a new retail financial trading platform. Its business innovation is that it is a retail platform - allowing anyone to trade in a range of financial derivative products[2]. A trading platform like this needs very low latency - trades have to be processed quickly because the market is moving rapidly. A retail platform adds complexity because it has to do this for lots of people. So the result is more users, with lots of trades, all of which need to be processed quickly.[3]


Given the shift to multi-core thinking, this kind of demanding performance would naturally suggest an explicitly concurrent programming model - and indeed this was their starting point. But the thing that got people's attention at QCon was that this wasn't where they ended up. In fact they ended up by doing all the business logic for their platform: all trades, from all customers, in all markets - on a single thread. A thread that will process 6 million orders per second using commodity hardware.[4]


Processing lots of transactions with low latency and none of the complexities of concurrent code - how can I resist digging into that? Fortunately another difference LMAX has from other financial companies is that they are quite happy to talk about their technological decisions. So now that LMAX has been in production for a while, it's time to explore their fascinating design.


Overall Structure.


Figure 1: LMAX's architecture in three blobs.


At a top level, the architecture has three parts.


• business logic processor[5]

• input disruptor

• output disruptors


As its name implies, the business logic processor handles all the business logic in the application. As I indicated above, it does this as a single-threaded java program which reacts to method calls and produces output events. Consequently it's a simple java program that doesn't require any platform frameworks to run other than the JVM itself, which allows it to be easily run in test environments.


Although the Business Logic Processor can run in a simple environment for testing, there is rather more involved choreography to get it to run in a production setting. Input messages need to be taken off a network gateway and unmarshaled, replicated and journaled. Output messages need to be marshaled for the network. These tasks are handled by the input and output disruptors. Unlike the Business Logic Processor, these are concurrent components, since they involve IO operations which are both slow and independent. They were designed and built especially for LMAX, but they (like the overall architecture) are applicable elsewhere.


Business Logic Processor.


Keeping it all in memory.


The Business Logic Processor takes input messages sequentially (in the form of a method invocation), runs business logic on each, and emits output events. It operates entirely in-memory: there is no database or other persistent store. Keeping all data in-memory has two important benefits. Firstly it's fast - there's no database to provide slow IO to access, nor is there any transactional behavior to execute since all the processing is done sequentially. The second advantage is that it simplifies programming - there's no object/relational mapping to do. All the code can be written using Java's object model without having to make any compromises for the mapping to a database.


Using an in-memory structure has an important consequence - what happens if everything crashes? Even the most resilient systems are vulnerable to someone pulling the power. The heart of dealing with this is Event Sourcing - which means that the current state of the Business Logic Processor is entirely derivable by processing the input events. As long as the input event stream is kept in a durable store (which is one of the jobs of the input disruptor) you can always recreate the current state of the business logic engine by replaying the events.
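To make that concrete, here is a minimal sketch of the event-sourcing idea in Java. It is not LMAX code - the Event, OrderBook, and BusinessLogicProcessor names are invented for illustration - but it shows why recovery is just a deterministic fold of the journaled events over an empty state.

    import java.util.List;

    // Hypothetical types, for illustration only.
    interface Event { void applyTo(OrderBook book); }

    class OrderBook {
        // in-memory working state: orders, positions, prices (omitted)
    }

    class BusinessLogicProcessor {
        private final OrderBook book = new OrderBook();

        // Normal operation: each input event mutates the in-memory state.
        void onEvent(Event e) {
            e.applyTo(book);
        }

        // Recovery: rebuild exactly the same state by replaying the durable
        // journal from the beginning (or from the latest snapshot).
        static BusinessLogicProcessor recover(List<Event> journal) {
            BusinessLogicProcessor p = new BusinessLogicProcessor();
            for (Event e : journal) {
                p.onEvent(e);   // single-threaded, deterministic replay
            }
            return p;
        }
    }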


A good way to understand this is to think of a version control system. A version control system is a sequence of commits; at any time you can build a working copy by applying those commits. VCSs are more complicated than the Business Logic Processor because they must support branching, while the Business Logic Processor is a simple sequence.


So, in theory, you can always rebuild the state of the Business Logic Processor by reprocessing all the events. In practice, however, that would take too long should you need to spin one up. So, just as with version control systems, LMAX can make snapshots of the Business Logic Processor state and restore from the snapshots. They take a snapshot every night during periods of low activity. Restarting the Business Logic Processor is fast; a full restart - including restarting the JVM, loading a recent snapshot, and replaying a day's worth of journals - takes less than a minute.


Snapshots make starting up a new Business Logic Processor faster, but not fast enough should a Business Logic Processor crash at 2pm. As a result LMAX keeps multiple Business Logic Processors running all the time[6]. Each input event is processed by multiple processors, but every processor except one has its output ignored. Should the live processor fail, the system switches to another one. This ability to handle fail-over is another benefit of using Event Sourcing.


By event sourcing into replicas they can switch between processors in a matter of micro-seconds. As well as taking snapshots every night, they also restart the Business Logic Processors every night. The replication allows them to do this with no downtime, so they continue to process trades 24/7.


For more background on Event Sourcing, see the draft pattern on my site from a few years ago. The article is more focused on handling temporal relationships rather than the benefits that LMAX use, but it does explain the core idea.


Event Sourcing is valuable because it allows the processor to run entirely in-memory, but it has another considerable advantage for diagnostics. If some unexpected behavior occurs, the team copies the sequence of events to their development environment and replays them there. This allows them to examine what happened much more easily than is possible in most environments.


This diagnostic capability extends to business diagnostics. There are some business tasks, such as in risk management, that require significant computation that isn't needed for processing orders. An example is getting a list of the top 20 customers by risk profile based on their current trading positions. The team handles this by spinning up a replica domain model and carrying out the computation there, where it won't interfere with the core order processing. These analysis domain models can have variant data models, keep different data sets in memory, and run on different machines.


Tuning performance.


So far I've explained that the key to the speed of the Business Logic Processor is doing everything sequentially, in-memory. Just doing this (and nothing really stupid) allows developers to write code that can process 10K TPS[7]. They then found that concentrating on the simple elements of good code could bring this up into the 100K TPS range. This just needs well-factored code and small methods - essentially this allows Hotspot to do a better job of optimizing and for CPUs to be more efficient in caching the code as it's running.


It took a bit more cleverness to go up another order of magnitude. There are several things that the LMAX team found helpful to get there. One was to write custom implementations of the java collections that were designed to be cache-friendly and careful with garbage[8]. An example of this is using primitive java longs as hashmap keys with a specially written array-backed Map implementation (LongToObjectHashMap). In general they've found that the choice of data structures often makes a big difference; most programmers just grab whatever List they used last time rather than thinking about which implementation is the right one for this context.[9]
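LongToObjectHashMap's implementation isn't shown in the article, but the general technique - open addressing over flat arrays keyed by primitive longs, so there is no boxing and no per-entry node object to allocate or chase - can be sketched roughly like this (an illustrative toy, not the LMAX implementation; resizing, deletion, and full-table handling are omitted):

    // A minimal open-addressing map from primitive long to Object.
    // No boxing, no Entry nodes: keys and values live in two flat arrays,
    // which is friendlier to the CPU cache and creates no garbage per put.
    class LongToObjectHashMap<V> {
        private final long[] keys;
        private final Object[] values;
        private final int mask;          // capacity is a power of two

        LongToObjectHashMap(int capacityPowerOfTwo) {
            keys = new long[capacityPowerOfTwo];
            values = new Object[capacityPowerOfTwo];
            mask = capacityPowerOfTwo - 1;
        }

        void put(long key, V value) {
            int i = (int) (mix(key) & mask);
            while (values[i] != null && keys[i] != key) {
                i = (i + 1) & mask;      // linear probing walks adjacent memory
            }
            keys[i] = key;
            values[i] = value;
        }

        @SuppressWarnings("unchecked")
        V get(long key) {
            int i = (int) (mix(key) & mask);
            while (values[i] != null) {
                if (keys[i] == key) return (V) values[i];
                i = (i + 1) & mask;
            }
            return null;
        }

        private static long mix(long k) { return k * 0x9E3779B97F4A7C15L; }
    }

Because the keys sit in one contiguous long[], a probe sequence touches adjacent memory, which is exactly the cache-friendly access pattern the paragraph above describes.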


Another technique to reach that top level of performance is putting attention into performance testing. I've long noticed that people talk a lot about techniques to improve performance, but the one thing that really makes a difference is to test it. Even good programmers are very good at constructing performance arguments that end up being wrong, so the best programmers prefer profilers and test cases to speculation.[10] The LMAX team has also found that writing tests first is a very effective discipline for performance tests.


Programming Model.


This style of processing does introduce some constraints into the way you write and organize the business logic. The first of these is that you have to tease out any interaction with external services. An external service call is going to be slow, and with a single thread will halt the entire order processing machine. As a result you can't make calls to external services within the business logic. Instead you need to finish that interaction with an output event, and wait for another input event to pick it back up again.


I'll use a simple non-LMAX example to illustrate. Imagine you are making an order for jelly beans by credit card. A simple retailing system would take your order information, use a credit card validation service to check your credit card number, and then confirm your order - all within a single operation. The thread processing your order would block while waiting for the credit card to be checked, but that block wouldn't be very long for the user, and the server can always run another thread on the processor while it's waiting.


In the LMAX architecture, you would split this operation into two. The first operation would capture the order information and finish by outputting an event (credit card validation requested) to the credit card company. The Business Logic Processor would then carry on processing events for other customers until it received a credit-card-validated event in its input event stream. On processing that event it would carry out the confirmation tasks for that order.
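Sketched in Java with invented names (again, this is the jelly-bean shop, not LMAX code), the split looks something like this:

    // Invented types for the jelly-bean example, for illustration only.
    class OrderService {
        private final EventPublisher out;           // writes to an output disruptor
        private final java.util.Map<Long, String> pending = new java.util.HashMap<>();

        OrderService(EventPublisher out) { this.out = out; }

        // Step 1: an order arrives. Don't call the card service here - that
        // would block the single business-logic thread. Record the pending
        // order and emit an output event requesting validation.
        void onOrderPlaced(long orderId, String cardNumber) {
            pending.put(orderId, cardNumber);
            out.publish(new CardValidationRequested(orderId, cardNumber));
        }

        // Step 2 (possibly much later): the validation result comes back as
        // an ordinary input event, and processing of this order resumes.
        void onCardValidated(long orderId, boolean approved) {
            pending.remove(orderId);
            out.publish(approved ? new OrderConfirmed(orderId)
                                 : new OrderRejected(orderId));
        }
    }

    interface EventPublisher { void publish(Object event); }
    record CardValidationRequested(long orderId, String cardNumber) {}
    record OrderConfirmed(long orderId) {}
    record OrderRejected(long orderId) {}

In between the two steps, the processor is free to handle events for every other customer.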


Working in this kind of event-driven, asynchronous style is somewhat unusual - although using asynchrony to improve the responsiveness of an application is a familiar technique. It also helps the business process be more resilient, as you have to be more explicit in thinking about the different things that can happen with the remote application.


A second feature of the programming model lies in error handling. The traditional model of sessions and database transactions provides a helpful error handling capability. Should anything go wrong, it's easy to throw away everything that happened so far in the interaction. Session data is transient, and can be discarded, at the cost of some irritation to the user if in the middle of something complicated. If an error occurs on the database side you can rollback the transaction.


LMAX's in-memory structures are persistent across input events, so if there is an error it's important to not leave that memory in an inconsistent state. However there's no automated rollback facility. As a consequence the LMAX team puts a lot of attention into ensuring the input events are fully valid before doing any mutation of the in-memory persistent state. They have found that testing is a key tool in flushing out these kinds of problems before going into production.


Input and Output Disruptors.


Although the business logic occurs in a single thread, there are a number of tasks to be done before we can invoke a business object method. The original input for processing comes off the wire in the form of a message, and this message needs to be unmarshaled into a form convenient for the Business Logic Processor to use. Event Sourcing relies on keeping a durable journal of all the input events, so each input message needs to be journaled onto a durable store. Finally the architecture relies on a cluster of Business Logic Processors, so we have to replicate the input messages across this cluster. Similarly on the output side, the output events need to be marshaled for transmission over the network.


Figure 2: The activities done by the input disruptor (using UML activity diagram notation)


The replicator and journaler involve IO and therefore are relatively slow. After all, the central idea of the Business Logic Processor is that it avoids doing any IO. Also these three tasks are relatively independent; all of them need to be done before the Business Logic Processor works on a message, but they can be done in any order. So unlike the Business Logic Processor, where each trade changes the market for subsequent trades, there is a natural fit for concurrency.


To handle this concurrency the LMAX team developed a special concurrency component, which they call a Disruptor [11].


The LMAX team have released the source code for the Disruptor with an open source licence.


At a crude level you can think of a Disruptor as a multicast graph of queues where producers put objects on it that are sent to all the consumers for parallel consumption through separate downstream queues. When you look inside you see that this network of queues is really a single data structure - a ring buffer. Each producer and consumer has a sequence counter to indicate which slot in the buffer it's currently working on. Each producer/consumer writes its own sequence counter but can read the others' sequence counters. This way the producer can read the consumers' counters to ensure the slot it wants to write in is available without any locks on the counters. Similarly a consumer can ensure it only processes messages once another consumer is done with it by watching the counters.
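A drastically simplified single-producer, single-consumer version of that idea might look like the following. This is a sketch of the counter mechanism only - the real Disruptor adds batching, cache-line padding, configurable wait strategies, and multi-consumer coordination:

    import java.util.concurrent.atomic.AtomicLong;

    // Lock-free SPSC ring: the only shared state is two sequence counters,
    // and each counter has exactly one writer.
    class TinyRingBuffer<T> {
        private final Object[] slots;
        private final int mask;                                     // size is a power of two
        private final AtomicLong producerSeq = new AtomicLong(-1);  // last slot written
        private final AtomicLong consumerSeq = new AtomicLong(-1);  // last slot read

        TinyRingBuffer(int sizePowerOfTwo) {
            slots = new Object[sizePowerOfTwo];
            mask = sizePowerOfTwo - 1;
        }

        // Producer: reads the consumer's counter to see whether the slot is
        // free, but never takes a lock on anything.
        boolean offer(T value) {
            long next = producerSeq.get() + 1;
            if (next - consumerSeq.get() > slots.length) return false; // ring full
            slots[(int) (next & mask)] = value;
            producerSeq.set(next);              // volatile write publishes the slot
            return true;
        }

        // Consumer: reads the producer's counter to see whether data is ready.
        @SuppressWarnings("unchecked")
        T poll() {
            long next = consumerSeq.get() + 1;
            if (next > producerSeq.get()) return null;  // nothing new yet
            T value = (T) slots[(int) (next & mask)];
            consumerSeq.set(next);              // tells the producer the slot is free
            return value;
        }
    }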


Figure 3: The input disruptor coordinates one producer and four consumers.


Output disruptors are similar but they only have two sequential consumers for marshaling and output.[12] Output events are organized into several topics, so that messages can be sent to only the receivers who are interested in them. Each topic has its own disruptor.


The disruptors I've described are used in a style with one producer and multiple consumers, but this isn't a limitation of the design of the disruptor. The disruptor can work with multiple producers too, in this case it still doesn't need locks.[13]


A benefit of the disruptor design is that it makes it easier for consumers to catch up quickly if they run into a problem and fall behind. If the unmarshaler has a problem when processing on slot 15 and returns when the receiver is on slot 31, it can read data from slots 16-30 in one batch to catch up. This batch read of the data from the disruptor makes it easier for lagging consumers to catch up quickly, thus reducing overall latency.


I've described things here, with one each of the journaler, replicator, and unmarshaler - this indeed is what LMAX does. But the design would allow multiple of these components to run. If you ran two journalers then one would take the even slots and the other journaler would take the odd slots. This allows further concurrency of these IO operations should this become necessary.


The ring buffers are large: 20 million slots for the input buffer and 4 million slots for each of the output buffers. The sequence counters are 64-bit long integers that increase monotonically even as the ring slots wrap.[14] The buffer is set to a size that's a power of two so the compiler can do an efficient modulus operation to map from the sequence counter number to the slot number. Like the rest of the system, the disruptors are bounced overnight. This bounce is mainly done to wipe memory so that there is less chance of an expensive garbage collection event during trading. (I also think it's a good habit to regularly restart, so that you rehearse how to do it for emergencies.)
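For instance, with a power-of-two buffer the wrap-around mapping needs no division at all - a small sketch:

    class SlotMapping {
        public static void main(String[] args) {
            int size = 1 << 22;                        // ~4M slots, a power of two
            long sequence = 5_000_000L;                // monotonically increasing counter
            int slot = (int) (sequence & (size - 1));  // same as sequence % size, but one AND
            System.out.println(slot);
        }
    }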


The journaler's job is to store all the events in a durable form, so that they can be replayed should anything go wrong. LMAX does not use a database for this, just the file system. They stream the events onto the disk. In modern terms, mechanical disks are horribly slow for random access, but very fast for streaming - hence the tag-line "disk is the new tape".[15]


Earlier on I mentioned that LMAX runs multiple copies of its system in a cluster to support rapid failover. The replicator keeps these nodes in sync. All communication in LMAX uses IP multicasting, so clients don't need to know which IP address is the master node. Only the master node listens directly to input events and runs a replicator. The replicator broadcasts the input events to the slave nodes. Should the master node go down, its lack of heartbeat will be noticed; another node becomes master, starts processing input events, and starts its replicator. Each node has its own input disruptor and thus has its own journal and does its own unmarshaling.


Even with IP multicasting, replication is still needed because IP messages can arrive in a different order on different nodes. The master node provides a deterministic sequence for the rest of the processing.


The unmarshaler turns the event data from the wire into a java object that can be used to invoke behavior on the Business Logic Processor. Therefore, unlike the other consumers, it needs to modify the data in the ring buffer so it can store this unmarshaled object. The rule here is that consumers are permitted to write to the ring buffer, but each writable field can only have one parallel consumer that's allowed to write to it. This preserves the principle of only having a single writer. [16]


Figure 4: The LMAX architecture with the disruptors expanded.


The disruptor is a general purpose component that can be used outside of the LMAX system. Usually financial companies are very secretive about their systems, keeping quiet even about items that aren't germane to their business. Not only has LMAX been open about its overall architecture, it has also open-sourced the disruptor code - an act that makes me very happy. Not only will this allow other organizations to make use of the disruptor, it will also allow for more testing of its concurrency properties.


Queues and their lack of mechanical sympathy.


The LMAX architecture caught people's attention because it's a very different way of approaching a high performance system to what most people are thinking about. So far I've talked about how it works, but haven't delved too much into why it was developed this way. This tale is interesting in itself, because this architecture didn't just appear. It took a long time of trying more conventional alternatives, and realizing where they were flawed, before the team settled on this one.


Most business systems these days have a core architecture that relies on multiple active sessions coordinated through a transactional database. The LMAX team were familiar with this approach, and confident that it wouldn't work for LMAX. This assessment was founded in the experiences of Betfair - the parent company who set up LMAX. Betfair is a betting site that allows people to bet on sporting events. It handles very high volumes of traffic with a lot of contention - sports bets tend to burst around particular events. To make this work they have one of the hottest database installations around and have had to do many unnatural acts in order to make it work. Based on this experience they knew how difficult it was to maintain Betfair's performance and were sure that this kind of architecture would not work for the very low latency that a trading site would require. As a result they had to find a different approach.


Their initial approach was to follow what so many are saying these days - that to get high performance you need to use explicit concurrency. For this scenario, this means allowing orders to be processed by multiple threads in parallel. However, as is often the case with concurrency, the difficulty comes because these threads have to communicate with each other. Processing an order changes market conditions and these conditions need to be communicated.


The approach they explored early on was the Actor model and its cousin SEDA. The Actor model relies on independent, active objects with their own thread that communicate with each other via queues. Many people find this kind of concurrency model much easier to deal with than trying to do something based on locking primitives.


The team built a prototype exchange using the actor model and did performance tests on it. What they found was that the processors spent more time managing queues than doing the real logic of the application. Queue access was a bottleneck.


When pushing performance like this, it starts to become important to take account of the way modern hardware is constructed. The phrase Martin Thompson likes to use is "mechanical sympathy". The term comes from race car driving and it reflects the driver having an innate feel for the car, so they are able to feel how to get the best out of it. Many programmers, and I confess I fall into this camp, don't have much mechanical sympathy for how programming interacts with hardware. What's worse is that many programmers think they have mechanical sympathy, but it's built on notions of how hardware used to work that are now many years out of date.


One of the dominant factors with modern CPUs that affects latency is how the CPU interacts with memory. These days going to main memory is a very slow operation in CPU terms. CPUs have multiple levels of cache, each of which is significantly faster. So to increase speed you want to get your code and data in those caches.


At one level, the actor model helps here. You can think of an actor as its own object that clusters code and data, which is a natural unit for caching. But actors need to communicate, which they do through queues - and the LMAX team observed that it's the queues that interfere with caching.


The explanation runs like this: in order to put some data on a queue, you need to write to that queue. Similarly, to take data off the queue, you need to write to the queue to perform the removal. This is write contention - more than one client may need to write to the same data structure. To deal with the write contention a queue often uses locks. But if a lock is used, that can cause a context switch to the kernel. When this happens the processor involved is likely to lose the data in its caches.


The conclusion they came to was that to get the best caching behavior, you need a design that has only one core writing to any memory location[17]. Multiple readers are fine, processors often use special high-speed links between their caches. But queues fail the one-writer principle.


This analysis led the LMAX team to a couple of conclusions. Firstly it led to the design of the disruptor, which determinedly follows the single-writer constraint. Secondly it led to the idea of exploring the single-threaded business logic approach, asking the question of how fast a single thread can go if it's freed of concurrency management.


The essence of working on a single thread is to ensure that you have one thread running on one core, the caches warm up, and as much memory access as possible goes to the caches rather than to main memory. This means that both the code and the working set of data need to be accessed as consistently as possible. Also, keeping small objects with code and data together allows them to be swapped between the caches as a unit, simplifying the cache management and again improving performance.


An essential part of the path to the LMAX architecture was the use of performance testing. The consideration and abandonment of an actor-based approach came from building and performance testing a prototype. Similarly, many of the steps in improving the performance of the various components were enabled by performance tests. Mechanical sympathy is very valuable - it helps to form hypotheses about what improvements you can make, and guides you to forward steps rather than backward ones - but in the end it's the testing that gives you the convincing evidence.


Performance testing in this style, however, is not a well-understood topic. Regularly the LMAX team stresses that coming up with meaningful performance tests is often harder than developing the production code. Again mechanical sympathy is important to developing the right tests. Testing a low level concurrency component is meaningless unless you take into account the caching behavior of the CPU.


One particular lesson is the importance of writing tests against null components to ensure the performance test is fast enough to really measure what real components are doing. Writing fast test code is no easier than writing fast production code and it's too easy to get false results because the test isn't as fast as the component it's trying to measure.


Should you use this architecture?


At first glance, this architecture appears to be for a very small niche. After all, the driver that led to it was to be able to run lots of complex transactions with very low latency - most applications don't need to run at 6 million TPS.


But the thing that fascinates me about this application is that they have ended up with a design which removes much of the programming complexity that plagues many software projects. The traditional model of concurrent sessions surrounding a transactional database isn't free of hassles. There's usually a non-trivial effort that goes into the relationship with the database. Object/relational mapping tools can ease much of the pain of dealing with a database, but they don't deal with it all. Most performance tuning of enterprise applications involves futzing around with SQL.


These days, you can get more main memory into your servers than us old guys could get as disk space. More and more applications are quite capable of putting all their working set in main memory - thus eliminating a source of both complexity and sluggishness. Event Sourcing provides a way to solve the durability problem for an in-memory system, running everything in a single thread solves the concurrency issue. The LMAX experience suggests that as long as you need less than a few million TPS, you'll have enough performance headroom.


There is a considerable overlap here with the growing interest in CQRS. An event sourced, in-memory processor is a natural choice for the command-side of a CQRS system. (Although the LMAX team does not currently use CQRS.)


So what indicates you shouldn't go down this path? This is always a tricky question for little-known techniques like this, since the profession needs more time to explore its boundaries. A starting point, however, is to think of the characteristics that encourage the architecture.


One characteristic is that this is a connected domain where processing one transaction always has the potential to change how following ones are processed. With transactions that are more independent of each other, there's less need to coordinate, so using separate processors running in parallel becomes more attractive.


LMAX concentrates on figuring out the consequences of how events change the world. Many sites are more about taking an existing store of information and rendering various combinations of that information to as many eyeballs as they can find - e.g., think of any media site. Here the architectural challenge often centers on getting your caches right.


Another characteristic of LMAX is that this is a backend system, so it's reasonable to consider how applicable it would be for something acting in an interactive mode. Increasingly, web applications are helping us get used to server systems that react to requests, an aspect that does fit in well with this architecture. Where this architecture goes further than most such systems is its absolute use of asynchronous communications, resulting in the changes to the programming model that I outlined earlier.


These changes will take some getting used to for most teams. Most people tend to think of programming in synchronous terms and are not used to dealing with asynchrony. Yet it's long been true that asynchronous communication is an essential tool for responsiveness. It will be interesting to see if the wider use of asynchronous communication in the javascript world, with AJAX and node.js, will encourage more people to investigate this style. The LMAX team found that while it took a bit of time to adjust to the asynchronous style, it soon became natural and often easier. In particular, error handling was much easier to deal with under this approach.


The LMAX team certainly feels that the days of the coordinating transactional database are numbered. The fact that you can write software more easily using this kind of architecture and that it runs more quickly removes much of the justification for the traditional central database.


For my part, I find this a very exciting story. Much of my goal is to concentrate on software that models complex domains. An architecture like this provides good separation of concerns, allowing people to focus on Domain-Driven Design and keeping much of the platform complexity well separated. The close coupling between domain objects and databases has always been an irritation - approaches like this suggest a way out.




1: The Free Lunch is Over.


This is the title of a famous essay by Herb Sutter. He describes the "free lunch" as the ever increasing clock speed of processors that regularly gave us more CPU performance every year. His point was that such clock cycle increases were no longer going to happen, instead performance increases would come in terms of multiple cores. But to take advantage of multiple cores, you need software that is capable of working concurrently - so without a shift in programming style people would no longer get the performance lunch for free.


2: I shall remain silent on what I think about the value of this innovation.


3: User Base.


All trading systems need low latency, since one trade can affect later trades and there's a lot of competition based on rapid reaction. Most trading platforms are for professionals - banks, brokers, etc - and typically have hundreds of users. A retail system has the potential for many more users; Betfair has millions of users, and LMAX is designed for that scale. (The LMAX team isn't allowed to disclose its actual volumes.)


As it turns out, although a retail system has a lot of users, most of the activity comes from market makers. During volatile periods an instrument can get hundreds of updates per second, with unusual micro-bursts of hundreds of transactions within a single microsecond.


4: Hardware.


The 6 million TPS benchmark was measured on a 3GHz dual-socket quad-core Nehalem-based Dell server with 32GB RAM.


5: The team does not use the name Business Logic Processor, in fact they have no name for that component, just referring to it as the business logic or core services. I've given it a name to make it easier to talk about in this article.


6: Currently LMAX runs two Business Logic Processors in its main data center and a third at a disaster recovery site. All three process input events.


7: What's in a transaction.


When people talk about transaction timing, one of the problems is what exactly is in a transaction. In some cases it's little more than inserting a new record in a database. LMAX's transactions are reasonably complex, more complex than a typical retail sale.


Placing an order in an exchange involves:


• checking the target market is open to take orders

• checking the order is valid for that market

• choosing the right matching policy for the type of order

• sequencing the order so that each order is matched at the best possible price and matched with the right liquidity

• creating and publicizing the trades made as a consequence of the match

• updating prices based on the new trades


8: At this scale of latency, you have to be aware of the garbage collector. For almost all systems these days, a modern GC compaction isn't going to have any noticeable effect on performance. However when you are trying to process millions of transactions per second with minimum jitter, a GC pause becomes a problem. The thing to remember is that short-lived objects are ok, as they get collected quickly. So are objects that are permanent, since they will live forever. The problematic objects are those that will get promoted to an older generation but will eventually die. As this fragments the older generation region, it will trigger the compaction.


9: I rarely think about which collection implementation to use. This is perfectly reasonable when you're not in performance critical code. Different contexts suggest different behavior.


10: An interesting side-note. While the LMAX team shares much of the current interest in functional programming, they believe that OO provides a better fit for this kind of problem. They've noticed that as they work to write faster code, they move away from a functional style towards an OO style. Partly this is because of the copying of data that functional styles require to maintain immutability. But it's also because objects provide a better model of a complex domain with a richer choice of data structures.


11: The name "disruptor" was inspired from a couple of sources. One is the the fact that the LMAX team sees this component as something that disrupts current thinking on concurrency. The other is a response to the fact that Java is introducing a phaser, so it's natural to include disruptors too.


12: It would be possible to journal the output events too. This would have the advantage of not needing to recalculate them should they need to be replayed for downstream services. In practice, however, this isn't worthwhile. The business logic is deterministic and very fast, so there's no gain from storing the results.


13: Although it does need to use CAS instructions in this case. See the disruptor technical paper for more information.


14: This does mean that if they process a billion transactions per second the counter will wrap in 292 years, causing some hell to break loose. They have decided that fixing this is not a high priority.


15: SSDs are better at random access, but a disk-like IO system slows them down.


16: Another complication when writing fields is you have to ensure that any fields being written to are separated into different cache lines.


17: Ensuring a single writer to a memory location.


A complication in following the single-writer principle is that processors don't grab memory one location at a time. Rather they sweep up multiple contiguous locations, called a cache line, into cache in one go. Accessing memory in cache-line chunks is obviously more efficient, but it also means that you have to ensure you don't have locations within that cache line that are written by different cores. So, for example, the Disruptor's sequence counters are padded to ensure they appear in separate cache lines.
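On the JVM the usual (if blunt) trick is to surround the hot counter with unused fields so it fills a cache line by itself - roughly the idea behind the Disruptor's padding, sketched here rather than copied from it. The sketch assumes 64-byte cache lines; note that modern JVMs also offer the @Contended annotation for this, and that aggressive dead-field elimination can defeat naive padding:

    // Padding a sequence counter so no other hot field can share its cache line.
    // 7 longs on each side (~56 bytes) plus the value itself spans a typical
    // 64-byte line regardless of where the JVM lays out the object.
    class PaddedSequence {
        @SuppressWarnings("unused")
        private long p1, p2, p3, p4, p5, p6, p7;     // padding before
        private volatile long value = -1;
        @SuppressWarnings("unused")
        private long q1, q2, q3, q4, q5, q6, q7;     // padding after

        long get() { return value; }
        void set(long v) { value = v; }
    }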


Acknowledgements.


Financial institutions are usually secretive about their technical work, often with little reason. This is a problem as it hampers the ability of the profession to learn from experience. So I'm especially thankful for LMAX's openness in discussing their experiences - both with this article and in their other material.


The main creators of the Disruptor are Martin Thompson, Mike Barker, and Dave Farley.


Martin Thompson and Dave Farley gave me a detailed walk-through of the LMAX architecture that served as the basis for this article. They also responded swiftly to email questions to improve my early drafts.


Concurrent programming is a tricky field that requires lots of attention to be competent at - and I have not put that effort in. As a result I'm entirely dependent upon others for understanding on concurrency and am thankful for their patient advice.


Further Reading.


If you'd prefer a video description of the LMAX architecture from LMAX team members, your best bet is the QCon presentation given in San Francisco 2010 by Martin Thompson and Michael Barker.


The source code for the Disruptor is available as open source. There is also a good technical paper (pdf) that goes into more depth as well as a collection of blogs and articles on it.


Various members of the LMAX team have their own blogs: Martin Thompson, Michael Barker, and Trisha Gee.


11 Best Practices for Low Latency Systems.


It's been 8 years since Google noticed that an extra 500ms of latency dropped traffic by 20% and Amazon realized that 100ms of extra latency dropped sales by 1%. Ever since then developers have been racing to the bottom of the latency curve, culminating in front-end developers squeezing every last millisecond out of their JavaScript, CSS, and even HTML. What follows is a random walk through a variety of best practices to keep in mind when designing low latency systems. Most of these suggestions are taken to the logical extreme but of course tradeoffs can be made. (Thanks to an anonymous user for asking this question on Quora and getting me to put my thoughts down in writing.)


Choose the right language.


Scripting languages need not apply. Though they keep getting faster and faster, when you are looking to shave those last few milliseconds off your processing time you cannot have the overhead of an interpreted language. Additionally, you will want a strong memory model to enable lock free programming so you should be looking at Java, Scala, C++11 or Go.


Keep it all in memory.


I/O will kill your latency, so make sure all of your data is in memory. This generally means managing your own in-memory data structures and maintaining a persistent log, so you can rebuild the state after a machine or process restart. Some options for a persistent log include Bitcask, Krati, LevelDB and BDB-JE. Alternatively, you might be able to get away with running a local, persisted in-memory database like redis or MongoDB (with memory >> data). Note that you can lose some data on crash due to their background syncing to disk.
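As a minimal sketch of the "in-memory state plus persistent log" arrangement (hypothetical code, not one of the libraries above; a real journal would batch writes, handle partial records, and checksum entries):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    // In-memory state with a write-ahead journal: every change is appended
    // to the log before it is applied, so state survives a restart.
    class JournaledCounter {
        private long state;                         // the in-memory "database"
        private final FileChannel journal;

        JournaledCounter(Path logFile) throws IOException {
            journal = FileChannel.open(logFile,
                    StandardOpenOption.CREATE, StandardOpenOption.READ,
                    StandardOpenOption.WRITE);
            // Recovery: replay every journaled delta to rebuild state.
            ByteBuffer buf = ByteBuffer.allocate(Long.BYTES);
            while (journal.read(buf) == Long.BYTES) {
                buf.flip();
                state += buf.getLong();
                buf.clear();
            }
        }

        void add(long delta) throws IOException {
            ByteBuffer buf = ByteBuffer.allocate(Long.BYTES).putLong(delta);
            buf.flip();
            journal.write(buf);                     // journal first...
            journal.force(false);                   // ...make it durable...
            state += delta;                         // ...then mutate memory
        }

        long value() { return state; }
    }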


Keep data and processing colocated.


Network hops are faster than disk seeks but even still they will add a lot of overhead. Ideally, your data should fit entirely in memory on one host. With AWS providing almost 1/4 TB of RAM in the cloud and physical servers offering multiple TBs this is generally possible. If you need to run on more than one host you should ensure that your data and requests are properly partitioned so that all the data necessary to service a given request is available locally.


Keep the system underutilized.


Low latency requires always having resources to process the request. Don’t try to run at the limit of what your hardware/software can provide. Always have lots of head room for bursts and then some.


Keep context switches to a minimum.


Context switches are a sign that you are doing more compute work than you have resources for. You will want to limit your number of threads to the number of cores on your system and to pin each thread to its own core.
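The JDK alone only gets you part of the way - it can size a pool to the core count, but actual pinning requires the OS or a native/affinity library. A hedged sketch:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class CorePerThread {
        public static void main(String[] args) {
            // One worker thread per core, no more: oversubscribing cores is
            // what forces the scheduler to context-switch among your threads.
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService workers = Executors.newFixedThreadPool(cores);
            // ... submit long-lived tasks, one per core ...
            // Pinning each thread to a core is OS-level, e.g. on Linux:
            //   taskset -c 2 java ...   (pins the whole JVM to core 2)
            // or a JNI-based affinity library inside each worker's run().
            workers.shutdown();
        }
    }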


Keep your reads sequential.


All forms of storage, whether rotational, flash based, or memory, perform significantly better when used sequentially. When issuing sequential reads to memory you trigger the use of prefetching at the RAM level as well as at the CPU cache level. If done properly, the next piece of data you need will always be in L1 cache right before you need it. The easiest way to help this process along is to make heavy use of arrays of primitive data types or structs. Following pointers, either through use of linked lists or through arrays of objects, should be avoided at all costs.
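For example, the two methods below compute the same sum, but the first walks a contiguous long[] in address order (prefetch-friendly), while the second chases a pointer per element through boxed objects that may be scattered anywhere on the heap:

    import java.util.List;

    class SequentialAccess {
        // Contiguous primitive array: the next element is always adjacent
        // in RAM, so the hardware prefetcher stays ahead of the loop.
        static long sumPrimitives(long[] values) {
            long sum = 0;
            for (int i = 0; i < values.length; i++) {
                sum += values[i];
            }
            return sum;
        }

        // Same logic over boxed objects: one pointer dereference (plus
        // unboxing) per element, with no locality guarantees.
        static long sumBoxed(List<Long> values) {
            long sum = 0;
            for (Long v : values) {
                sum += v;
            }
            return sum;
        }
    }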


Batch your writes.


This sounds counterintuitive but you can gain significant improvements in performance by batching writes. However, there is a misconception that this means the system should wait an arbitrary amount of time before doing a write. Instead, one thread should spin in a tight loop doing I/O. Each write will batch all the data that arrived since the last write was issued. This makes for a very fast and adaptive system.
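A sketch of that single spinning writer thread (invented types; shutdown and error handling omitted):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    class BatchingWriter implements Runnable {
        private final Queue<byte[]> pending = new ConcurrentLinkedQueue<>();

        void submit(byte[] record) { pending.offer(record); }

        @Override
        public void run() {
            List<byte[]> batch = new ArrayList<>();
            while (true) {
                // Drain everything that arrived while the last write was in flight.
                byte[] r;
                while ((r = pending.poll()) != null) batch.add(r);
                if (!batch.isEmpty()) {
                    writeAll(batch);       // one I/O call for the whole batch
                    batch.clear();
                }
                // No sleep and no timer: under light load batches stay tiny
                // (low latency), under heavy load they grow by themselves
                // (high throughput) - the adaptive behavior described above.
            }
        }

        private void writeAll(List<byte[]> batch) {
            // e.g. a single gathering write to a FileChannel or socket (not shown)
        }
    }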


Respect your cache.


With all of these optimizations in place, memory access quickly becomes a bottleneck. Pinning threads to their own cores helps reduce CPU cache pollution and sequential I/O also helps preload the cache. Beyond that, you should keep memory sizes down using primitive data types so more data fits in cache. Additionally, you can look into cache-oblivious algorithms which work by recursively breaking down the data until it fits in cache and then doing any necessary processing.


Non blocking as much as possible.


Make friends with non-blocking and wait-free data structures and algorithms. Every time you use a lock you have to go down the stack to the OS to mediate the lock, which is a huge overhead. Often, if you know what you are doing, you can get around locks by understanding the memory model of the JVM, C++11 or Go.
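The workhorse of lock-free code is the compare-and-set retry loop, which never leaves user space. A small self-contained example using the JDK's atomics - tracking a running maximum without a lock:

    import java.util.concurrent.atomic.AtomicLong;

    class LockFreeHighWaterMark {
        private final AtomicLong max = new AtomicLong(Long.MIN_VALUE);

        // Record a new observation without ever taking a lock: read the
        // current max and CAS in the new value; if another thread won the
        // race, re-read and retry. No kernel transition on any path.
        void observe(long value) {
            long current;
            do {
                current = max.get();
                if (value <= current) return;     // nothing to update
            } while (!max.compareAndSet(current, value));
        }

        long get() { return max.get(); }
    }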


Async as much as possible.


Any processing and particularly any I/O that is not absolutely necessary for building the response should be done outside the critical path.


Parallelize as much as possible.


Any processing and particularly any I/O that can happen in parallel should be done in parallel. For instance if your high availability strategy includes logging transactions to disk and sending transactions to a secondary server those actions can happen in parallel.
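Taking the post's own example, here is a plain-JDK sketch (placeholder method bodies) that journals to disk and ships to a secondary concurrently, so the caller pays the slower of the two latencies rather than their sum:

    import java.util.concurrent.CompletableFuture;

    class HighAvailabilityPipeline {
        void commit(byte[] txn) {
            // Journal locally and replicate remotely at the same time.
            CompletableFuture<Void> journaled =
                    CompletableFuture.runAsync(() -> writeToDisk(txn));
            CompletableFuture<Void> replicated =
                    CompletableFuture.runAsync(() -> sendToSecondary(txn));
            CompletableFuture.allOf(journaled, replicated).join();
        }

        private void writeToDisk(byte[] txn) { /* fsync'd append, not shown */ }
        private void sendToSecondary(byte[] txn) { /* network send, not shown */ }
    }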


Almost all of this comes from following what LMAX is doing with their Disruptor project. Read up on that and follow anything that Martin Thompson does.




Published by


Benjamin Darfler.


29 thoughts on “11 Best Practices for Low Latency Systems”


And happy to be on your list 🙂


Good article. One beef: Go doesn't have a sophisticated memory model like Java or C++11. If your system fits with the go-routine and channels architecture it's all good; otherwise you're out of luck. AFAIK you cannot opt out of the run-time scheduler, so no native OS threads, and the ability to build your own lock-free data structures (like SPSC queues/ring-buffers) is also severely lacking.


Thanks for the reply. While the Go memory model (golang/ref/mem) might not be as robust as Java or C++11 I was under the impression that you could still manage to create lock free data structures using it. For example github/textnode/gringo, github/scryner/lfreequeue and github/mocchira/golfhash. Maybe I’m missing something? Admittedly I know much less about Go than the JVM.


Benjamin, the Go memory model detailed here: golang/ref/mem is mostly in terms of channels and mutexes. I looked through the packages that you listed, and while the data structures there are "lock free" they are not equivalent to what one might build in Java/C++11. The sync package, as of now, doesn't have support for relaxed atomics or the acquire/release semantics of C++11. Without that support it's difficult to build SPSC data structures as efficient as the ones possible in C++/Java. The projects that you link use atomic.Add…, which is a sequentially consistent atomic. It's built with XADD as it should be – github/tonnerre/golang/blob/master/src/pkg/sync/atomic/asm_amd64.s.


I am not trying to knock Go down. It takes minimal effort to write async IO and concurrent code that is sufficiently fast for most people. The std library too is highly tuned for performance. Golang also has support for structs, which is missing in Java. But as it stands, I think the simplistic memory model and the go-routine runtime stand in the way of building the kind of systems you are talking about.


Thank you for the in depth reply. I hope people find this back and forth useful.


While a ‘native’ language is probably better, it’s not strictly required. Facebook showed us it can be done in PHP. Granted they use pre-compiled PHP with their HHVM machine. But it’s possible!


Unfortunately PHP still lacks an acceptable memory model even if HHVM significantly improved the execution speed.


While I’ll fight to use higher-level languages just as much as the next guy, I think the only way to achieve the low-latency apps people are looking for is to drop down to a language like C. It seems the tougher it is to write in a language, the faster it executes.


I would strongly recommend you look at the work being done in the projects and blogs that I linked to. The JVM is quickly becoming the hot spot for these types of systems because it provides a strong memory model and garbage collection which enable lock free programming which is nearly or completely impossible with a weak or undefined memory model and reference counters for memory management.


I’ll take a look, Benjamin. Thanks for pointing them out.


Garbage collection for lock free programming is a bit of a deus ex machina. MPMC and SPSC queues can both be built without needing GC. There are also plenty of ways to do lock free programming without garbage collection and reference counting is not the only way. Hazard pointers, RCU, Proxy-Collectors etc all provide support for deferred reclamation and are usually coded in support of an algorithm (not generic), hence they are usually much easier to build. Of course the trade-off lies in the fact that production quality GCs have a lot of work put into them and will help the less experienced programmer write lock-free algorithms (should they be doing this at all?) without coding up deferred reclamation schemes. Some links on work done in this field: cs.toronto.edu/


Yes C/C++ just recently gained a memory model, but that doesn’t mean that they were completely unsuitable for lock-free code earlier. GCC and other high quality compilers had compiler specific directives to do lock free programming on supported platforms for a really long time – it was just not standardized in the language. Linux and other platforms have provided these primitives for some time too. Java’s unique position WAS that it provided a formalized memory model that it guaranteed to work on all supported platforms. Though in principle this is awesome, most server side developers work on one platform (Linux/Windows). They already had the tools to build lock free code for their platform.


GC is a great tool but not a necessary one. It has a cost both in terms of performance and in complexity (all the tricks needed to avoid STW GC). C++11/C11 already have support for proper memory models. Let’s not forget that JVMs have no responsibility to support the Unsafe API in the future. Unsafe code is “unsafe”, so you lose the benefits of Java’s safety features. Finally, IMO the Unsafe code used to lay out memory and simulate structs in Java looks a lot uglier than C/C++ structs, where the compiler is doing that work for you in a reliable manner. C and C++ also provide access to all the low level platform specific power tools like the PAUSE instruction, SSE/AVX/NEON etc. You can even tune your code layout through linker scripts! The power provided by the C/C++ tool chain is really unmatched by the JVM. Java is a great platform nonetheless, but I think its biggest advantage is that ordinary business logic (90% of your code?) can still depend on GC and safety features and make use of highly tuned and tested libraries written with Unsafe. This is a great trade-off between getting the last 5% of perf and being productive. A trade-off that makes sense for a lot of people, but a trade-off nonetheless. Writing complicated application code in C/C++ is a nightmare after all.
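
(One concrete example of those power tools: an x86 spin-wait using the PAUSE instruction via the _mm_pause() intrinsic, which hints to the core that this is a spin loop, reducing pipeline flushes and power draw. x86-only, illustrative.)

```cpp
#include <atomic>
#include <immintrin.h>  // _mm_pause

void spin_until_set(const std::atomic<bool>& flag) {
    while (!flag.load(std::memory_order_acquire))
        _mm_pause();    // emits the PAUSE instruction
}
```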




Missing the 12th: do not use garbage-collected languages. GC is a bottleneck in the worst case. It likely halts all threads. It’s global. It distracts the architect from managing one of the most critical resources (CPU-near memory) himself.


Actually, a lot of this work comes directly from Java. To do lock-free programming right you need a clear memory model, which C++ only recently gained. If you know how to work with the GC and not against it, you can create low-latency systems, often with much more ease.


I have to agree with Ben here. There has been a lot of progress in GC parallelism in the last decade or so, with the G1 collector being the latest incarnation. It may take a little time to tune the heap and various knobs to get the GC to collect with almost no pause, but this pales in comparison to the developer time it takes to not have GC.


You can even go one step further and create systems that produce so little garbage that you can easily push your GC outside of your operating window. This is how all of the high frequency trading shops do it when running on the JVM.


> Do not use garbage-collected languages.


Or, at least, “traditional” garbage-collected languages. Because they are different – while Erlang too has a collector, it doesn’t create bottlenecks, because it doesn’t “stop the world” the way Java does while collecting garbage – instead it halts individual small “micro-threads” on a microsecond scale, so it’s not noticeable at large.


Rewrite that to “traditional” garbage collection *algorithms*. At LMAX we use Azul Zing, and just by using a different JVM with a different approach to garbage collection, we’ve seen huge improvements in performance, because both major and minor GCs are orders of magnitude cheaper.


There are other costs which offset that, of course: you use a hell of a lot more heap, and Zing isn’t cheap.


Reblogged this on Java Prorgram Examples and commented:


A must-read article for Java programmers: in 10 minutes it gives you the lessons you would otherwise learn only after spending considerable time tuning and developing low-latency systems in Java.


Reviving an old thread, but (amazingly) this has to be pointed out:


1) Higher-level languages (e.g. Java) don’t elicit functionality from the hardware that isn’t available to lower-level languages (e.g. C); to state that so-and-so is “completely impossible” in C while readily doable in Java is complete rubbish without acknowledging that Java runs on virtual hardware, where the JVM has to synthesize functionality required by Java but not provided by the physical hardware. If a JVM (written in C, say) can synthesize functionality X, then so can a C programmer.


2) “Lock free” isn’t what people think it is, except almost by coincidence in certain circumstances, such as single-core x86; multicore x86 cannot run lock-free without memory barriers, which have complexities and costs similar to regular locking. As per 1 above, if lock-free works in a given environment, it is because it is supported by hardware, or emulated/synthesized by software in a virtual environment.


Great points, Julius. The point I was trying to make (maybe unsuccessfully) is that it’s prohibitively difficult to apply many of these patterns in C, since they rely on GC. It goes beyond simply using memory barriers: you have to consider freeing memory as well, which gets particularly difficult when you are dealing with lock-free and wait-free algorithms. This is where GC adds a huge win. That said, I hear Rust has some very interesting ideas around memory ownership that might begin to address some of these issues.


How Trading Systems Function.


Algorithmic automated trading, or algorithmic trading, has been at centre stage of the trading world for more than a decade now. The percentage of volumes attributed to algorithmic trading has seen a significant rise in the last decade. As a result, it has become a highly competitive market that is heavily dependent on technology. Consequently, the basic architecture of automated trading systems that execute algorithmic strategies has undergone major changes over the last decade and continues to do so. For firms, especially those using high-frequency trading systems, it has become a necessity to innovate on technology in order to compete in the world of algorithmic trading, making the field a hotbed for advances in computer and network technologies.


In this post, we demystify the architecture behind automated trading systems for our readers. We compare the new architecture of automated trading systems with the traditional trading architecture, and look at some of the major components behind these systems.


Traditional Architecture.


Any trading system, conceptually, is nothing more than a computational block that interacts with the exchange on two different streams:

• It receives market data.
• It sends order requests and receives replies from the exchange.


The market data that is received typically informs the system of the latest order book. It may contain some additional information, such as the volume traded so far, and the last traded price and quantity for a scrip. However, to make a decision on the data, the trader may need to look at old values or derive certain parameters from history. To cater to that, a conventional system would have a historical database to store the market data, and tools to use that database. The analysis would also involve a study of the trader’s past trades, hence another database for storing the trading decisions as well. Last, but not least, a GUI interface for the trader to view all this information on the screen.


The whole trading system can now be broken down into:


• The exchange(s) – the external world.
• The server, which receives market data, stores it, and stores the orders generated by the user.
• The application, which takes inputs from the user (including the trading decisions), provides an interface for viewing the information (including the data and orders), and hosts an order manager that sends orders to the exchange.


New Architecture.


The traditional architecture could not scale up to the needs and demands of automated trading with DMA. The latency between the origin of an event and the order generation went beyond the dimension of human control and entered the realm of milliseconds and microseconds. So the tools to handle market data and its analysis needed to adapt accordingly. Order management also had to become more robust, capable of handling many more orders per second. Since the time frame is so small compared to human reaction time, risk management also needs to handle orders in real time and in a completely automated way.


For example, even if the reaction time for an order is 1 millisecond (which is a lot compared to the latencies we see today), the system is still capable of making 1000 trading decisions in a single second. This means each of these 1000 trading decisions needs to go through risk management within the same second to reach the exchange. This is just a problem of complexity. Since the architecture now involves automated logic, 100 traders can be replaced by a single automated trading system. This adds scale to the problem. So each of the logical units generates 1000 orders, and 100 such units mean 100,000 orders every second. This means that the decision-making and order-sending parts need to be much faster than the market data receiver in order to match the data rate.


Hence, the level of infrastructure that this module demands has to be far superior to that of a traditional system (discussed in the previous section). Hence the engine which runs the logic of decision making, also known as the ‘Complex Event Processing’ engine, or CEP, moved from within the application to the server. The application layer is now little more than a user interface for viewing and providing parameters to the CEP.


The problem of scaling also leads to an interesting situation. Say 100 different logic units are being run on a single market data event (as discussed in the earlier example). There may, however, be common pieces of complex calculations that need to be run for most of the 100 logic units, for example the calculation of Greeks for options. If each logic unit functioned independently, each would perform the same Greek calculation, needlessly using up processor resources. To optimize away this redundancy, complex redundant calculations are typically hived off into a separate calculation engine that provides the Greeks as an input to the CEP, as sketched below.
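
A hedged sketch of this fan-out arrangement follows; all type and function names here are hypothetical, and compute_greeks is a stub standing in for a real pricing model. The point is that the expensive calculation runs once per market data event and is fanned out to every strategy.

```cpp
#include <functional>
#include <vector>

struct Tick   { double underlying_price; };
struct Greeks { double delta, gamma, vega; };

Greeks compute_greeks(const Tick&) {
    return {0.5, 0.01, 0.2};              // placeholder for a real model
}

class CalcEngine {
    std::vector<std::function<void(const Tick&, const Greeks&)>> strategies_;
public:
    void subscribe(std::function<void(const Tick&, const Greeks&)> s) {
        strategies_.push_back(std::move(s));
    }
    void on_tick(const Tick& t) {
        const Greeks g = compute_greeks(t);   // computed once, not 100 times
        for (auto& s : strategies_) s(t, g);  // fan out to all logic units
    }
};
```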


Although the application layer is primarily a view, some of the risk checks (which are now resource-hungry operations owing to the problem of scale) can be offloaded to the application layer, especially those that have to do with the sanity of user inputs, such as fat-finger errors. The rest of the risk checks are now performed by a separate Risk Management System (RMS) within the Order Manager (OM), just before releasing an order. The problem of scale also means that where earlier there were 100 different traders managing their own risk, there is now only one RMS system to manage risk across all logical units/strategies. However, some risk checks may be specific to certain strategies, and some may need to be done across all strategies. Hence the RMS itself involves a strategy-level RMS (SLRMS) and a global RMS (GRMS). It may also involve a UI to view the SLRMS and GRMS.
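
As an illustration, here is a minimal sketch of the two tiers of checks described above; the thresholds and names are hypothetical, not taken from any particular RMS.

```cpp
#include <cmath>
#include <cstdint>

struct Order { double price; int64_t qty; };

constexpr int64_t kMaxOrderQty   = 100000;  // fat-finger quantity cap
constexpr double  kMaxPriceDrift = 0.10;    // 10% collar around last trade

// Application-layer sanity check on raw user/strategy input.
bool passes_fat_finger(const Order& o, double last_trade_price) {
    if (o.qty <= 0 || o.qty > kMaxOrderQty) return false;
    return std::fabs(o.price - last_trade_price)
           <= kMaxPriceDrift * last_trade_price;
}

// Global RMS check: cap the running exposure across all strategies.
bool passes_global_limit(int64_t open_exposure, const Order& o,
                         int64_t max_exposure) {
    return open_exposure + static_cast<int64_t>(o.qty * o.price) <= max_exposure;
}
```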


Emergence of protocols for automated trading systems.


With innovation come new needs. Since the new architecture was capable of scaling to many strategies per server, the need to connect to multiple destinations from a single server emerged. So the order manager hosted several adaptors to send orders to multiple destinations and receive data from multiple exchanges. Each adaptor acts as an interpreter between the protocol that is understood by the exchange and the protocol of communication within the system. Multiple exchanges mean multiple adaptors.


However, to add a new exchange to the system, a new adaptor has to be designed and plugged into the architecture, since each exchange follows its own protocol, optimized for the features that the exchange provides. To avoid this hassle of adaptor proliferation, standard protocols were designed. The most prominent amongst them is the FIX (Financial Information Exchange) protocol (see our post on introduction to FIX protocol). This not only makes it manageable to connect to different destinations, but also drastically reduces the time to market when connecting to a new destination. For further reading: Connecting FXCM over FIX, a detailed tutorial.
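
To make the adaptor’s job concrete, the sketch below hand-builds a minimal FIX NewOrderSingle (35=D) at the wire level, computing BodyLength (tag 9) and CheckSum (tag 10) per the FIX framing rules. It is illustrative only: production systems use FIX engines such as QuickFIX, and several required header fields (SenderCompID, TargetCompID, MsgSeqNum, SendingTime) are omitted here for brevity.

```cpp
#include <cstdio>
#include <string>

std::string fix_new_order_single(const std::string& symbol, char side,
                                 long qty, double px) {
    const char SOH = '\x01';  // FIX field delimiter
    char buf[128];

    // Message body: everything after tag 9, before tag 10.
    std::snprintf(buf, sizeof buf,
                  "35=D%c55=%s%c54=%c%c38=%ld%c40=2%c44=%.2f%c",
                  SOH, symbol.c_str(), SOH, side, SOH, qty, SOH, SOH, px, SOH);
    std::string body = buf;

    // Header: BeginString (8) and BodyLength (9) = byte count of the body.
    std::string msg = "8=FIX.4.2";
    msg += SOH;
    std::snprintf(buf, sizeof buf, "9=%zu%c", body.size(), SOH);
    msg += buf;
    msg += body;

    // Trailer: CheckSum (10) = sum of all bytes so far, modulo 256.
    unsigned sum = 0;
    for (unsigned char c : msg) sum += c;
    std::snprintf(buf, sizeof buf, "10=%03u%c", sum % 256, SOH);
    msg += buf;
    return msg;
}
```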


The presence of standard protocols makes it easy to integrate with third-party vendors, for analytics or market data feeds as well. As a result, the market becomes very efficient, as integrating with a new destination or vendor is no longer a constraint.


Moreover, simulation becomes very easy, as receiving data from the real market and sending orders to a simulator is just a matter of using the FIX protocol to connect to the simulator. The simulator itself can be built in-house or procured from a third-party vendor. Similarly, recorded data can simply be replayed, with the adaptors being agnostic to whether the data is being received from the live market or from a recorded data set.


Emergence of low latency architectures.


With the building blocks of an algorithmic trading system in place, strategies came to be optimized on the ability to process huge amounts of data in real time and make quick trading decisions. But with the advent of standard communication protocols like FIX, the technology entry barrier to setting up an algorithmic trading desk became lower, and hence the field more competitive. As servers got more memory and higher clock frequencies, the focus shifted to reducing latency in decision making. Over time, reducing latency became a necessity for many reasons, such as:


• The strategy makes sense only in a low-latency environment.
• Survival of the fittest – competitors pick you off if you are not fast enough.


The problem, however, is that latency is really an overarching term that encompasses several different delays. To quantify all of them in one generic term usually does not make much sense. Although it is very easily understood, it is quite difficult to quantify. It therefore becomes increasingly important how the problem of reducing latency is approached.


If we look at the basic life cycle:


1. A market data packet is published by the exchange.
2. The packet travels over the wire.
3. The packet arrives at a router on the server side.
4. The router forwards the packet over the network on the server side.
5. The packet arrives on the Ethernet port of the server.
6. Depending on whether it is UDP or TCP, processing takes place, and the packet, stripped of its headers and trailers, makes its way to the memory of the adaptor.
7. The adaptor then parses the packet and converts it into a format internal to the algorithmic trading platform.
8. The packet now travels through the several modules of the system – the CEP, the tick store, etc.
9. The CEP analyses it and sends an order request.
10. The order request again goes through the reverse of the cycle as the market data packet.


High latency at any one of these steps ensures high latency for the entire cycle. Hence latency optimization usually starts with the first step in this cycle that is in our control, i.e., “the packet travels over the wire”. The easiest thing to do here is to shorten the distance to the destination as much as possible. Colocations are facilities provided by exchanges to host the trading server in close proximity to the exchange. The following diagram illustrates the gains that can be made by cutting the distance.


For any kind of high-frequency strategy involving a single destination, colocation has become a de facto must. However, strategies that involve multiple destinations need some careful planning. Factors such as the time taken by the destination to reply to order requests, compared against the ping time between the two destinations, must be considered before making such a decision. The decision may also depend on the nature of the strategy.


Network latency is usually the first step in reducing the overall latency of an algorithmic trading system. However, there are plenty of other places where the architecture can be optimized.


Propagation latency.


Propagation latency signifies the time taken to send the bits along the wire, constrained of course by the speed of light.


Several optimizations have been introduced to reduce propagation latency, apart from reducing the physical distance. For example, the estimated round-trip time for an ordinary cable between Chicago and New York is 13.1 milliseconds. Spread Networks, in October 2012, announced latency improvements which brought the estimated round-trip time down to 12.98 milliseconds. Microwave communication was adopted further by firms such as Tradeworx, bringing the estimated round-trip time down to 8.5 milliseconds. Note that the theoretical minimum is about 7.5 milliseconds. Continuing innovations are pushing the boundaries of science and fast approaching the theoretical limit of the speed of light. The latest developments in laser communication, earlier adopted in defense technologies, have further shaved off an already thinning latency by nanoseconds over short distances.
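
These figures can be sanity-checked from first principles. The sketch below assumes a one-way geodesic of roughly 1,180 km between the two cities (an assumption; real cable routes are longer, which is why the quoted 13.1 ms fiber figure exceeds the idealized value computed here) and signal speeds of about two-thirds of c in optical fiber and c for microwave/vacuum.

```cpp
#include <cstdio>

int main() {
    const double km    = 1180.0;           // assumed one-way distance
    const double c     = 299792.458;       // km/s, light in vacuum
    const double fiber = c * 2.0 / 3.0;    // typical speed in optical fiber

    std::printf("fiber round trip:  %.1f ms\n", 2 * km / fiber * 1e3);  // ~11.8 ms
    std::printf("vacuum round trip: %.1f ms\n", 2 * km / c * 1e3);      // ~7.9 ms
    return 0;
}
```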


Network processing latency.


Network processing latency signifies the latency introduced by routers, switches, etc.


The next level of optimization in the architecture of an algorithmic trading system would be in the number of hops that a packet takes to travel from point A to point B. A hop is defined as one portion of the path between source and destination, during which a packet doesn’t pass through a physical device like a router or a switch. For example, a packet could travel the same distance via two different paths, but it may have two hops on the first path versus three hops on the second. Assuming the propagation delay is the same, the routers and switches each introduce their own latency, and as a rule of thumb, the more hops, the more latency is added.


Network processing latency may also be affected by what we refer to as microbursts. Microbursts are defined as a sudden increase in the rate of data transfer which may not necessarily affect the average rate of data transfer. Since algorithmic trading systems are rule-based, all such systems will react to the same event in the same way. As a result, a lot of participating systems may send orders, leading to a sudden flurry of data transfer between the participants and the destination, i.e., a microburst. The following diagram depicts what a microburst is.


The first figure shows a 1-second view of the data transfer rate. We can see that the average rate is well below the available bandwidth of 1 Gbps. However, if we dive deeper and look at the second image (the 5-millisecond view), we see that the transfer rate has spiked above the available bandwidth several times each second. As a result, the packet buffers on the network stack, both in the network endpoints and in the routers and switches, may overflow. To avoid this, a bandwidth much higher than the observed average rate is usually allocated for an algorithmic trading system.


Serialization latency.


Serialization latency signifies the time taken to pull the bits on and off the wire.


A packet of size 1500 bytes transmitted on a T1 line (1,544,000 bps) would produce a serialization delay of about 8 milliseconds. The same 1500-byte packet over a 56K modem (57,344 bps) would take about 209 milliseconds. A 1G Ethernet line would reduce this latency to about 12 microseconds.
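
These figures follow directly from delay = packet size in bits divided by line rate, as the following minimal snippet reproduces.

```cpp
#include <cstdio>

int main() {
    const double bits = 1500 * 8.0;                               // 1500-byte packet
    std::printf("56K modem: %.0f ms\n", bits / 57344.0 * 1e3);    // ~209 ms
    std::printf("T1 line:   %.2f ms\n", bits / 1544000.0 * 1e3);  // ~7.77 ms
    std::printf("1G Ether:  %.0f us\n", bits / 1e9 * 1e6);        // ~12 us
    return 0;
}
```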


Interrupt latency.


Interrupt latency signifies a latency introduced by interrupts while receiving the packets on a server.


Interrupt latency is defined as the time elapsed between when an interrupt is generated and when the source of the interrupt is serviced. When is an interrupt generated? Interrupts are signals to the processor, emitted by hardware or software, indicating that an event needs immediate attention. The processor, in turn, responds by suspending its current activity, saving its state, and handling the interrupt. Whenever a packet is received on the NIC, an interrupt is sent to handle the bits that have been loaded into the NIC’s receive buffer. The time taken to respond to this interrupt not only affects the processing of the newly arriving payload, but also the latency of the existing processes on the processor.


Solarflare introduced OpenOnload in 2011, which implements a technique known as kernel bypass, where the processing of the packet is not left to the operating system kernel but handed to the userspace itself. The entire packet is directly mapped into user space by the NIC and is processed there. As a result, interrupts are completely avoided.


As a result, the processing rate of each packet is accelerated. The following diagram clearly demonstrates the advantages of kernel bypass.
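
OpenOnload itself intercepts the ordinary socket API transparently, so application code is unchanged; the sketch below is therefore not kernel bypass as such, but only illustrates the busy-polling receive model such userspace stacks use in place of interrupts. The port number is hypothetical.

```cpp
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port        = htons(9000);   // hypothetical feed port
    bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr);

    char pkt[2048];
    for (;;) {
        // MSG_DONTWAIT returns immediately instead of sleeping in the kernel.
        ssize_t n = recv(fd, pkt, sizeof pkt, MSG_DONTWAIT);
        if (n > 0) {
            // hand the packet to the feed handler here
        }
        // else: spin (optionally with a PAUSE hint) and poll again
    }
    close(fd);  // unreachable in this endless sketch
}
```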


Application latency.


Application latency signifies the time taken by the application to process.


This depends on several factors: the processing allocated to the application logic, the complexity of the calculations involved, programming efficiency, etc. Increasing the number of processors on the system would in general reduce the application latency, as would an increased clock frequency. Many algorithmic trading systems take advantage of dedicating processor cores to essential elements of the application, such as the strategy logic. This avoids the latency introduced by the process switching between cores.
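
A minimal sketch of this core-dedication technique (Linux-specific, using pthread_setaffinity_np; compile with -pthread; error handling omitted):

```cpp
#include <pthread.h>
#include <sched.h>

// Pin the calling thread to one core so the scheduler never migrates it.
// Returns 0 on success.
int pin_to_core(int core_id) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core_id, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}
```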


Similarly, if the programming of the strategy has been done keeping in mind the cache sizes and locality of memory access, then there will be many memory cache hits, resulting in a further reduction of latency. To facilitate this, a lot of systems use very low-level programming languages to optimize the code to the specific architecture of the processors. Some firms have even gone to the extent of burning complex calculations onto hardware using Field Programmable Gate Arrays (FPGAs). With increasing complexity comes increasing cost, and the following diagram aptly illustrates this.
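
As an illustration of the cache-locality point, the hypothetical layouts below contrast iterating over one field of many instruments in an array-of-structs versus a struct-of-arrays arrangement; the latter keeps the hot field dense in cache.

```cpp
#include <vector>

struct InstrumentAoS {                   // 128 bytes per instrument
    double bid, ask, last;
    char   other_fields[104];
};

struct BookSoA {                         // struct-of-arrays alternative
    std::vector<double> bid, ask, last;
};

double sum_bids_aos(const std::vector<InstrumentAoS>& v) {
    double s = 0;
    for (const auto& i : v) s += i.bid;  // strides 128 bytes per element
    return s;
}

double sum_bids_soa(const BookSoA& b) {
    double s = 0;
    for (double x : b.bid) s += x;       // dense, prefetch-friendly
    return s;
}
```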


Levels of sophistication.


The world of high-frequency algorithmic trading has entered an era of intense competition. With each participant adopting new methods of ousting the competition, technology has progressed by leaps and bounds. Modern algorithmic trading architectures are quite complex compared to their early-stage counterparts. Accordingly, advanced systems are more expensive to build, both in terms of time and money.


Conclusion:


This was a detailed post on algorithmic trading system architecture, which we hope gave an insightful view of the components involved, as well as of the various challenges that architecture developers need to handle or overcome in order to build robust automated trading systems.


If you want to learn various aspects of algorithmic trading, check out the Executive Programme in Algorithmic Trading (EPAT™). The course covers training modules like Statistics & Econometrics, Financial Computing & Technology, and Algorithmic & Quantitative Trading. EPAT™ equips you with the required skill sets to build a promising career in algorithmic trading. Enroll now!




2 thoughts on “How Trading Systems Function”


15 December 2017.


Very nice post. I simply stumbled upon your blog and wanted to say that I really enjoyed browsing your blog posts. After all, I’ll be subscribing to your feed, and I hope you write again very soon!


15 December 2017.


We are really glad you like our posts. The appreciation is what keeps us going.


We make sure to keep adding fresh content periodically. Do share our posts and help us spread the word about how people can benefit from algorithmic and quantitative trading.


You’d better know your high-frequency trading terminology.


With the increased investor interest in high-frequency trading (HFT), it is important for industry professionals to come up to speed with HFT terminology. A number of HFT terms have their origins in the computer networking/systems industry, which is to be expected, given that HFT is based on incredibly fast computer architecture and state-of-the-art software. We briefly discuss below 10 key HFT terms that we believe are essential to gaining an understanding of the subject.


Co-location.


Locating computers owned by HFT firms and proprietary traders in the same premises where an exchange’s computer servers are housed. This enables HFT firms to access stock prices a split second before the rest of the investing public. Co-location has become a lucrative business for exchanges, which charge HFT firms millions of dollars for the privilege of “low latency access”.


As Michael Lewis explains in his book “Flash Boys”, the huge demand for co-location is a major reason why some stock exchanges have expanded their data centers substantially. While the old New York Stock Exchange building occupied 46,000 square feet, the NYSE Euronext data center in Mahwah, New Jersey, is almost nine times larger, at 398,000 square feet.


Flash Trading.


A type of HFT trading wherein an exchange will “flash” information about buy and sell orders from market participants to HFT firms for a few fractions of a second before the information is made available to the public. Flash trading is controversial because HFT firms can use this information edge to trade ahead of pending orders, which can be construed as front running.


U.S. Senator Charles Schumer had urged the Securities and Exchange Commission in July 2009 to ban flash trading, saying that it created a two-tiered system where a privileged group received preferential treatment, while retail and institutional investors were put at an unfair disadvantage and deprived of a fair price for their transactions.


Latency.


The time elapsed from the moment a signal is sent to its receipt. Since lower latency equals faster speed, high-frequency traders spend heavily to obtain the fastest computer hardware, software and data lines, so as to execute orders as speedily as possible and gain a competitive edge in trading.


The biggest determinant of latency is the distance that the signal has to travel, or the length of the physical cable (usually fiber-optic) that carries data from one point to another. Since light in a vacuum travels at 186,000 miles per second, or 186 miles per millisecond, an HFT firm with its servers co-located right within an exchange would have a much lower latency – and hence a trading edge – than a rival firm located miles away.


Interestingly, an exchange’s co-location clients receive the same length of cable regardless of where they are located within the exchange premises, so as to ensure that they all have the same latency.


Liquidity rebates.


Most exchanges have adopted a “maker-taker model” to subsidize the provision of stock liquidity. In this model, investors and traders who put in limit orders typically receive a small rebate from the exchange upon execution of their orders, because they are regarded as having contributed to liquidity in the stock, i.e., they are “makers” of liquidity.


Conversely, those who put in market orders are regarded as “takers” of liquidity and are charged a modest fee by the exchange for their orders. While the rebates are typically fractions of a cent per share, they can add up to significant amounts over the millions of shares traded daily by high-frequency traders. Many HFT firms employ trading strategies specifically designed to capture as much of the liquidity rebates as possible.


Matching engine.


The software algorithm that forms the nucleus of an exchange’s trading system and continuously matches buy and sell orders, a function previously performed by specialists on the trading floor. Since the matching engine matches buyers and sellers for all stocks, it is of vital importance for ensuring the smooth functioning of an exchange. The matching engine resides in the exchange’s computers and is the primary reason why HFT firms try to be in as close proximity to the exchange servers as they possibly can.


Pinging.


Refers to the tactic of entering small marketable orders – usually for 100 shares – in order to learn about large hidden orders in dark pools or exchanges. While you can think of pinging as being analogous to a ship or submarine sending out sonar signals to detect upcoming obstructions or enemy vessels, in the HFT context, pinging is used to find hidden “prey”.


Here’s how: buy-side firms use algorithmic trading systems to break up large orders into much smaller ones and feed them steadily into the market, so as to reduce the market impact of large orders. In order to detect the presence of such large orders, HFT firms place bids and offers in 100-share lots for every listed stock.


Once a firm gets a “ping” (i.e., the HFT’s small order is executed) or a series of pings that alerts the HFT to the presence of a large buy-side order, it may engage in a predatory trading activity that ensures a nearly risk-free profit at the expense of the buy-sider, who will end up receiving an unfavorable price for its large order. Pinging has been likened to “baiting” by some influential market players, since its sole purpose is to lure institutions with large orders into revealing their hand.


Point of presence.


The point at which traders connect to an exchange. In order to reduce latency, the goal of HFT firms is to get as close to the point of presence as possible. See also “Co-location”.


Predatory trading.


Trading practices employed by some high-frequency traders to make nearly risk-free profits at the expense of investors. In Lewis’s book, the IEX exchange, which seeks to combat some of the shadiest HFT practices, identifies three activities that constitute predatory trading:


"Arbitragem de mercado lento" ou "arbitragem de latência", em que um comerciante de alta freqüência arbitra diferenças de preços de ações entre várias trocas. "Electronic front running", que envolve uma corrida da empresa HFT antes de uma grande ordem do cliente em uma troca, acumulando todas as ações oferecidas em várias outras trocas (se é uma ordem de compra) ou acertando todas as licitações (se for uma ordem de venda), e depois virar e vendê-los (ou comprá-los) do cliente e embolsar a diferença. "Rebate arbitrage" envolve a atividade HFT que tenta capturar descontos de liquidez oferecidos pelas trocas sem realmente contribuir para a liquidez. Veja também "Reembolsos de liquidez".


Securities Information Processor.


The technology used to collect quote and trade data from different exchanges, collate and consolidate that data, and continuously disseminate real-time price quotes and trades for all stocks. The SIP calculates the National Best Bid and Offer (NBBO) for all stocks, but because of the sheer volume of data it has to handle, it has a finite latency period.


A SIP’s latency in calculating the NBBO is generally higher than that of HFT firms (because of their faster computers and co-location), and it is this difference in latency – estimated by Lewis to occasionally reach as much as 25 milliseconds – that is at the heart of predatory HFT activity. Nasdaq OMX Group and NYSE Euronext each run a SIP on behalf of the 11 exchanges in the U.S.


Smart Routers.


Technology that determines to which exchanges orders or trades are sent. Smart routers can be programmed to send out pieces of large orders (after they are broken up by a trading algorithm) so as to obtain cost-effective trade execution. A smart router such as a sequential cost-effective router may direct an order to a dark pool and then to an exchange (if it is not executed in the former), or to an exchange where it is more likely to receive a liquidity rebate.


The Bottom Line.


HFT has been making waves and ruffling feathers (to use a mixed metaphor) in recent years. But regardless of your opinion of high-frequency trading, familiarizing yourself with these HFT terms should enable you to improve your understanding of this controversial topic.
