[Vietnamese] How to write antivirus software – B.2 The realtime protection mechanism, 24/7 protection for your computer

In the previous post we talked about scanning a file and checking whether it is a virus using hashes. In this post we will look at how the realtime protection mechanism in an AV works. Here I will only cover realtime protection as it relates to file access.

We often see the phrase "realtime protection" in AV products and have a rough picture of what it does; now let's dig into the mechanism and how to build a realtime feature for an AV.

What is the realtime protection mechanism and why should you enable it? Normally, after full-system virus scans the AV has already secured the machine to some degree, but in everyday use we download files, copy files from USB drives, CDs/DVDs or memory cards, access shared folders at work or school, install software, and so on; to this day these remain the biggest infection vectors for a computer. Based on this infection pattern, realtime protection was created so that every time a file is accessed (downloaded, copied, executed, ...) it is checked for viruses, thereby keeping the machine safe.

So what is the simplest way to build a realtime protection mechanism? Before answering, let's agree to call the realtime protection feature the "realtime engine" (and assume it runs on Windows). To build a realtime engine, we can simply hook the file-access-related functions such as CreateFile, OpenFile, WriteFile, CreateProcess, MoveFile, ... This gives us information about each file being accessed, and we can then use the hash-based scanning feature from the previous post to check whether that file is infected. Hooking these functions happens in user mode, so it has many limitations; we typically use it only for personal and learning purposes.
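The idea can be illustrated with a toy user-mode "hook" in Python: a wrapper around the file-open call that scans the file before granting access. This is only a sketch of the concept, not real API hooking (a real engine intercepts the Windows APIs named above across all processes), and the hash shown is the widely published MD5 of the EICAR test file, used here purely as an illustrative seed for the signature set:

```python
import hashlib
import os.path

# Illustrative signature set: the widely published MD5 of the EICAR test file.
VIRUS_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}

def md5_of(path):
    """Hash the file in chunks so large files do not blow up memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def guarded_open(path, mode="rb"):
    """Stand-in for a hooked CreateFile/OpenFile: scan first, then allow access."""
    if os.path.exists(path) and md5_of(path) in VIRUS_HASHES:
        raise PermissionError("access blocked: %s matches a virus signature" % path)
    return open(path, mode)
```

A real realtime engine would of course intercept every process's file access, not just its own, which is exactly why kernel-mode filter drivers come up next.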

Hooking has many limitations, so how do we get better protection? To intercept and inspect every file access, we must use drivers running in kernel mode. One kind of driver we can use is a filter driver. Writing drivers is fairly hard and requires solid knowledge of Windows operating system internals. If you don't want to invest that much effort, you are in luck: third-party vendors provide drivers of this kind, and we only need a user-mode API to work with them.

If a realtime engine is that simple, why is it hard, and why doesn't every vendor do it well? Intercepting file access and scanning inevitably adds latency to file loading for the system and applications (with many AV products you will notice the machine runs noticeably faster when you turn realtime protection off). A typical application, when run (loaded into memory), pulls in tens to hundreds of libraries. If intercepting and scanning each file takes 5-50 ms, you can estimate how slow things get with that many files being loaded. For some applications (services, COM components, ...), slow loading causes errors and can hang the application or the system. And poor error handling inside the engine itself can freeze the machine or cause a BSOD (blue screen of death). Building a realtime engine therefore demands careful design and testing before release.

So how do we optimize the realtime engine when it slows the machine down too much? Besides the optimization techniques from the previous post, you need a new component: a cache. For clean files (not infected), after each scan we can record the result and treat it as a local DB of clean files; before scanning, we first check whether the file is in the clean DB, and only scan if it is not. With enough information about each file, this cache can easily bring the latency down to <1ms per file (on an ordinary machine with an HDD).

How do we cache file information? Since speed is the priority here, we usually cannot afford to hash the entire file. We can simply hash the file path + the first part of the file + the last part + the file size + the date modified as the comparison key. The hash here can be CRC32/64. You can use a binary search tree to manage lookups in the cache DB. Files on a system are not static (they may be deleted or moved to another folder), so each file's cache entry needs a timeout; otherwise stale entries pollute the cache DB, wasting memory and degrading lookup performance.
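A minimal sketch of such a clean-file cache in Python (CRC32 over path + head + tail + size + mtime as the key, plus a per-entry timeout; a plain dict stands in for the search-tree-backed cache DB, and the 4 KB head/tail size and 24-hour TTL are arbitrary illustrative choices):

```python
import os
import time
import zlib

CACHE_TTL = 24 * 3600  # seconds; entries expire so moved/deleted files don't rot the cache

def cache_key(path, part_size=4096):
    """CRC32 over path + first/last bytes + size + mtime, as described above."""
    st = os.stat(path)
    with open(path, "rb") as f:
        head = f.read(part_size)
        f.seek(max(st.st_size - part_size, 0))
        tail = f.read(part_size)
    crc = zlib.crc32(path.encode())
    crc = zlib.crc32(head, crc)
    crc = zlib.crc32(tail, crc)
    crc = zlib.crc32(str((st.st_size, int(st.st_mtime))).encode(), crc)
    return crc

clean_cache = {}  # key -> expiry timestamp

def mark_clean(path, now=None):
    now = time.time() if now is None else now
    clean_cache[cache_key(path)] = now + CACHE_TTL

def is_cached_clean(path, now=None):
    now = time.time() if now is None else now
    expiry = clean_cache.get(cache_key(path))
    return expiry is not None and expiry > now
```

Because the key covers size and mtime, any modification to the file automatically misses the cache and forces a fresh scan.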

What happens when a new signature update matches a clean file already in the cache DB? To solve this, we can additionally store the full hash of each clean file, so that after every signature update we can filter matching entries out of the cache DB.
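A sketch of this purge step in Python, assuming each clean-cache entry maps a cache key to the file's full hash (the hash values here are purely illustrative placeholders):

```python
clean_cache = {}  # cache key -> full-file hash recorded when the file was scanned clean

def purge_on_update(new_signature_hashes):
    """After a signature update, evict cached 'clean' verdicts whose full hash
    now appears in the signature DB. Returns the number of evicted entries."""
    stale = [key for key, full_hash in clean_cache.items()
             if full_hash in new_signature_hashes]
    for key in stale:
        del clean_cache[key]
    return len(stale)
```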

Conclusion: so now we can picture how to build a simple realtime engine for an AV.

Next post: How to scan for and detect file-infecting virus families?

>VVM.


WhatsApp Partners With Open WhisperSystems To End-To-End Encrypt Billions Of Messages A Day

Good news for people who love privacy and security! Bad news for black-hat hackers and government surveillance agencies. WhatsApp, the wildly popular messaging app, has partnered with the crypto gurus at Open WhisperSystems to implement strong end-to-end encryption on all WhatsApp text messages, meaning not even Zuck himself can pry into your conversations, even if a court order demands it.

WhatsApp, of course, has hundreds of millions of daily users worldwide, and was purchased by Facebook earlier this year for an eye-popping $19 billion. Open WhisperSystems creates state-of-the-art end-to-end encryption systems such as Signal, for voice, and TextSecure, for text. (They have nothing to do with the messaging app Whisper.) As per the EFF’s Secure Messaging Scorecard, TextSecure is a huge upgrade from WhatsApp’s previous encryption.

And I do mean huge. Right now the TextSecure protocol has only rolled out to WhatsApp for Android–other clients, and group/media messaging, are coming soon–but already “billions of encrypted messages are being exchanged daily … we believe this already represents the largest deployment of end-to-end encrypted communication in history,” to quote OWS founder Moxie Marlinspike.

This comes in the wake of Apple, similarly, encrypting iOS 8 devices such that Apple cannot retrieve data stored on them (albeit in a closed-source, unverifiable way.) FBI director James Comey subsequently claimed “the post-Snowden pendulum” has “gone too far” amid concerns that the world is “going dark” to wiretaps and surveillance. These and other three-letter agencies–along with black-hat hackers and governments worldwide, thanks to WhatsApp’s immense global reach–won’t be happy about their new inability to pry into the contents of WhatsApp text messages.

However, Comey’s claims appear to suffer from the slight disadvantage of being completely false:

And I think the tech industry's response to the FBI and NSA's dismay at the widespread use of end-to-end encryption by more and more companies, and to the notion that the tech industry is "picking a fight" with them, speaks for itself.

And it’s fair to say that, in a world where surveillance technology seems to grow more powerful and pervasive every week, any meaningful blow that can be struck for privacy and anonymity is a welcome rebalancing. Kudos to WhatsApp for making this happen–Marlinspike, who approached them with the idea, stresses how impressed he’s been by their eagerness, dedication, and thoroughness–and to Open WhisperSystems for making it possible.

Marlinspike says OWS’s own goal is to keep producing strong, open-source privacy and security tools that companies (and individuals) can easily incorporate into their own services, without having to do their own crypto research and/or protocol design. They’ll keep working on Signal and TextSecure as reference implementations, using them to push the envelope and prove new ideas; and in the meantime, in one fell swoop, their TextSecure protocol suddenly has hundreds of millions of daily users, more than all but a tiny handful of companies.

[Vietnamese] How to write antivirus software – B.1 Scanning files with hashes

Introduction: I (VVM) worked on antivirus software (av for short) for about six years at CMC InfoSec. I no longer work on AV, but when talking with people in the IT industry, most of them cannot picture how an AV is built and assume it is very hard. So I am writing this series, "How to build antivirus software", so you can see how a (simple) AV is put together.

From its earliest days, an AV was essentially just a tool that scanned files and checked whether they were viruses. To this day, file scanning remains the main, most basic feature of an AV. So let's start with some questions.
So how do we know whether a file is a virus? There are many ways, but let me point out the most basic and most common one: compare the hash (e.g., MD5, SHA1, SHA2, ...) of the file we need to check against our collection of virus samples. If the two hashes match, we can conclude the file contains a virus, and if the file does not hook deep into the system, simply deleting the infected files is enough to consider the machine safe.

  *hash: a unique code generated from a given piece of content. Each different content yields a different hash (in theory there is still a small collision probability, but it is so tiny we can ignore it).
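As a concrete illustration, here is a minimal hash-based check in Python (MD5 is used as the example algorithm; the sample hash set would come from your virus sample DB):

```python
import hashlib

def file_md5(path):
    """Compute the MD5 of a file, reading in chunks to handle large files."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_virus(path, sample_hashes):
    """A file is flagged if its hash matches any hash in the sample set."""
    return file_md5(path) in sample_hashes
```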
The next question is: where do we get virus samples? Common sources are specialized forums, industry organizations, services such as virustotal.com, sample-exchange programs between antivirus vendors, and samples collected from users.

Suppose you already have a few hundred to a few thousand virus samples; how do you search that set? The simplest way is to compare the file's hash against each sample hash one by one, but it is better to apply a simple sorting algorithm (e.g., Quick Sort, Bubble Sort, Merge Sort, ...) and then use binary search for lookups.
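The sort-then-binary-search approach might look like this in Python, where the standard bisect module provides the binary search (any of the sorting algorithms mentioned would do for the one-time sort):

```python
from bisect import bisect_left

def build_index(sample_hashes):
    """Sort once up front; Python's built-in sort stands in for Quick/Merge Sort."""
    return sorted(sample_hashes)

def contains(index, h):
    """Binary search: O(log n) per lookup instead of a linear scan."""
    i = bisect_left(index, h)
    return i < len(index) and index[i] == h
```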

A follow-up question: how do we compare efficiently against a set of a few million samples or more (sometimes tens of millions)? Here we start needing real optimization, but in the most basic case you can use a binary search tree, and many products use exactly that to build their own database (DB). If you use a binary search tree, prefer a self-balancing binary search tree (a red-black tree is one example); alternatively, if you are only experimenting, or you are providing an online/cloud service, you can simply use an existing DB (RocksDB, Redis, Riak, MongoDB, PostgreSQL, ...) to avoid the effort of writing and testing your own.

That's one file; what about a folder or a drive with hundreds of thousands (sometimes millions) of files of varying sizes? First, for scanning many files, you can open multiple threads/processes to walk and check files (better yet, pipeline the work: some threads enumerate files, some compute hashes, some search and compare against the DB, ..., and tune the number of threads/processes to the system's resources). Second, for large files you can work in two stages: stage one hashes only the first few KB to check whether the data could be in the sample DB; if it could, stage two takes additional hashes + the file size, or hashes the whole file, to confirm the match against the DB (with this approach, large virus samples must be indexed the same way).
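The two-stage idea for large files can be sketched like this in Python (MD5 and a 4 KB head are illustrative choices; head_db and full_db are hypothetical names for the two levels of the sample DB):

```python
import hashlib

HEAD_SIZE = 4096  # stage 1 hashes only the first few KB

def head_hash(path):
    """Stage 1: cheap hash over just the beginning of the file."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read(HEAD_SIZE)).hexdigest()

def full_hash(path):
    """Stage 2: full-file hash, computed in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_large_file(path, head_db, full_db):
    """Head hash acts as a pre-filter; the full hash runs only on a hit."""
    if head_hash(path) not in head_db:
        return False                       # cannot be in the sample DB
    return full_hash(path) in full_db      # confirm the match
```

Most clean files are rejected by the cheap stage-one check, so the expensive full-file hash runs only for likely matches.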

Can we optimize further? Yes, we can. If you understand how CPUs compute, you will know that integer operations are fast, and if you design carefully to limit collisions, you can use CRC32/CRC64 combined with the file size to check hashes considerably faster than MD5/SHA1/SHA2. You should also take advantage of memory-allocation optimizations, lock-free/lockless algorithms, and inline functions to speed up computation. And if we later add a Realtime Engine, we can use it to speed up scanning considerably as well (by keeping a history of file states, ...).
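A sketch of the integer-friendly quick key (CRC32 combined with file size). Note that CRC32 is not collision-resistant, so, as the text says, you must account for collisions when building the sample set, typically by confirming hits with a stronger hash:

```python
import os
import zlib

def quick_key(path):
    """(CRC32, size) pair: pure integer arithmetic, far cheaper than MD5/SHA.
    Combining the CRC with the file size cuts down on accidental collisions."""
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            crc = zlib.crc32(chunk, crc)   # second arg continues the running CRC
    return crc, os.path.getsize(path)
```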

What about managing the virus sample DB? If it is a DB you build yourself, this too needs careful thought. The sample DB can be organized in a fixed format so that after an update the AV can load it immediately without an extra import step. Adding and removing samples happens all the time, and to limit re-downloading unchanged parts you can split the sample DB into many subsets stored as separate files (classified by time, severity, sample source, ...). Note also that the sample DB is usually very light, containing only the virus hash, the virus name (your own naming or borrowed from other sources), the severity, and an id for looking up more information when needed.

Conclusion: we now know how to use hashes to check whether a file is a virus. Scanning for file-infecting viruses, also known as polymorphic viruses, is a special case I will cover in later posts.

Next post: The realtime protection mechanism, 24/7 protection for your computer

>VVM.

Posted in AV

World’s First 1,000-Processor Chip Said to Show Promise Across Multiple Workloads


A blindingly fast microchip, the first to contain 1,000 independent processors and said to show promise for digital signal processing, video processing, encryption and datacenter/cloud workloads, has been announced by a team at the University of California, Davis. The “KiloCore” chip has a maximum computation rate of 1.78 trillion instructions per second and contains 621 million transistors, according to the development team, which presented the microchip at the 2016 Symposium on VLSI Technology and Circuits this month in Honolulu.

By way of comparison, if the KiloCore’s area were the same as a 32 nm Intel Core i7 processor, it would contain approximately 2300-3700 processors and have a peak execution rate of 4.1 to 6.6 trillion independent instructions per second, according to the design team.

Although Bevan Baas, professor of electrical and computer engineering at UC/Davis who led the chip architectural design team, told HPCwire’s sister publication, EnterpriseTech, that there are no current plans to commercialize the processor, he said it has important commercial implications, with several applications already developed for the chip.


“It has been shown to excel spectacularly with many digital signal processing, wireless coding/decoding, multimedia and embedded workloads, and recent projects have shown that it can also excel at computing kernels for some datacenter/cloud and scientific workloads,” he said. He said the KiloCore innovates in a number of areas covering architectures, application development and mapping, circuits, and VLSI design.

“We hope multiple aspects of KiloCore will influence the design of future computing systems,” he said. “For workloads that can be mapped to its architecture, it could very well have a place in exascale-class computing.”

The design team claims for KiloCore the highest clock-rate processor ever designed in a university. And while other multiple-processor chips have been created, none exceed about 300 processors, according to the team.

The KiloCore chip was fabricated by IBM using their 32 nm CMOS technology.

Beyond throughput performance, Baas said KiloCore is also the most energy-efficient many-core processor ever reported. For example, the 1,000 processors can execute 115 billion instructions per second while dissipating only 0.7 Watts, low enough to be powered by a single AA battery. The KiloCore chip executes instructions more than 100 times more efficiently than a modern laptop processor.

Each processor core can run its own small program independently of the others, Baas explained, which he said is a fundamentally more flexible approach than Single-Instruction-Multiple-Data approaches utilized by processors such as GPUs. The idea is to break an application up into many small pieces, each of which can run in parallel on different processors, enabling high throughput with lower energy use.

The KiloCore architecture is an example of a “fine-grain many-core” processor array, Baas said. Processors are kept as simple as possible so they occupy a small chip area, with numerous cores per chip. “Short low-capacitance wires result in high efficiency,” he said, and “operate at high clock frequencies (high performance in terms of high throughput and low latency).” The cores dissipate low power when both active and idle; in fact, he said, they dissipate zero active power when there is no work to do. Energy efficiency is also achieved by operation at low supply voltages and a relatively simple architecture consisting of a single-issue 7-stage pipeline with a small amount of memory per core and a message-passing-based inter-processor interconnect rather than a cache-based shared-memory model.

Baas said the team has completed a compiler and automatic program mapping tools for use in programming the chip.

(via HPCWire.com)

Logging for Unikernels

Unikernels are the next step in virtualized computing. There is a lot of hubbub right now in the tech-o-sphere about unikernels. In fact, in many circles unikernels are thought to be the Next Big Thing. There is a lot of recent news to justify this perspective. Docker, a key player in Linux Containers, has acquired Unikernel Systems. MirageOS, the library that you use to create unikernels, has solid backing from Xen and the Linux Foundation as an incubator project.

Unikernels are not going away anytime soon. Having a clear understanding of unikernel technology is important to anybody working in large-scale, distributed computing. Thus, I share. In this article I am going to explain the basic concepts behind unikernels and describe where they sit in the landscape of virtualized computing. I am also going to talk about an approach to logging from within unikernels that I learned from industry expert Adam Wick, Ph.D.

So let’s begin.

The Path From Virtual Machines to Unikernels.

A unikernel is a unit of binary code that runs directly on a hypervisor, very much in the same way that a virtual machine (VM) runs on a hypervisor. However, there is a difference. When you create a virtual machine, you are for all intents and purposes creating an abstract computer. As with any computer, your virtual machine will have RAM, disk storage, CPUs and network I/O. (Please see Figure 1, below.)

[Figure 1: Virtual machine architecture]

When you create a virtual machine, the resources you allocate from the host computer are static. For example, as shown above, when you create a VM with 16GB of RAM, a 4 core CPU and 1 TB of storage, that configuration is set. The VM owns the resources and will not give them back. Also, the operating system on a VM can be different than the OS on the host. For example, if your host computer is using Ubuntu, creating a VM that uses Windows is no problem at all.

The important thing to remember is that a VM is a full-blown computer with dedicated resources. However, there is a drawback. At runtime, should your 4 core, 16 GB VM use only 1 CPU and 1 GB of RAM, the other resources will sit around unused. Remember, there’s no giving back. Your VM owns the resources whether it uses them or not. Thus, there is the possibility that you can be paying for computing that is never used. Inefficient use of resources is a very real hazard on the Virtual Machine landscape.

Enter Unikernels

While full-blown VMs may be useful for some work, there are situations where you don’t need all the overhead that a VM requires. Let’s say you publish a web service that recommends stocks to buy based on hourly activity on the Internet. What do you need? You can get by with threading capability, a web client, a web server and some code that provides analysis intelligence. That’s it. This is where a unikernel comes in.

As mentioned above, a unikernel is a unit of binary code that runs directly on a hypervisor. Think of a unikernel as an application that has been pulled into the operating system’s kernel. However, a unikernel is a special type of application. It’s compiled code that contains not only application code, but also the parts of the operating system that are essential to its purpose. A unikernel thinks it is the only show in town. And because it is compiled, you don’t have to go through a whole lot of installation hocus pocus. It’s a single file. (Please see Figure 2, below.)

[Figure 2: Unikernel architecture]

Unikernels are very small. For example, a DNS server weighs in at around 449 KB; a web server at ~674 KB. A unikernel contains only the pieces of the operating system required, no more, no less. There are no unnecessary drivers, utilities or graphical rendering components.

Unikernels load into memory very fast. Start times can be measured in milliseconds. Thus, they are well suited to provide transient microservice functionality. A transient microservice is a microservice that is loaded and unloaded into memory to meet an immediate need. The service is not meant to have a long lifespan. It’s loaded, work gets done and then the service is terminated.

Given the small size of a unikernel and the fast loading speed, you can deploy thousands of them on a single machine, bringing them on and offline as the need requires. And because they are compiled binary artifacts, they are less susceptible to security breaches, provided the base compilation is clean.

A unikernel can be assigned a unique IP address. A unique IP address means that the unikernel is discoverable on the Internet. The implication is that you can have thousands of unikernels running on a machine, each communicating with the others via standard network protocols such as HTTP.

What about Containers?

A container is a virtualization technology in which components are assembled into a single deployment unit called a container. You do not have to use yum or apt-get to install application dependencies. Everything the container needs exists in the container.

A container is an isolated process. Thus, conceptually a container is like a VM in that it thinks that it’s the only show in town.

A container leverages the operating system of the host computer. Hence, there is no mixing and matching. You cannot have a Windows host computer running a Linux container.

Similar to a unikernel, a container uses only the resources it needs. Containers are highly efficient.

Unlike a VM and a unikernel which are managed by a hypervisor, containers are managed by a container manager. Docker and Kubernetes are popular container managers.

You can read more about containers here.


As mentioned above, a microservice lives behind an IP address that makes it available to other services. Typically the microservice provides a single piece of functionality that is stateless. Unlike VMs and Containers that can provide their own storage and authentication mechanisms, the architectural practice for unikernels is to have a strong boundary around the unikernel’s area of concern. The best practice is to have a unikernel use other services to meet needs beyond its concern boundary. For example, going back to the stock analysis service we described above, the purpose of that unikernel is to recommend stocks. If we want to make sure that only authorized parties are using the service we’d make it so our Stock Recommender unikernel uses an authorization/authentication service to allow users access to the service. (Please see Figure 3, below.)

[Figure 3: A unikernel delegating authorization/authentication to another service]

Unikernel technology is best suited for immutable microservices. The internals of a unikernel are written in stone. You do not update parts; there are no external parts. When it comes time to change a unikernel, you create a new binary and deploy that binary.

So far, so good? Excellent! Now that you have an understanding about the purpose and nature of unikernels, let’s move onto logging from within a unikernel.

The Importance of Logging

A unikernel is a compiled binary, similar to a low-level compilation unit such as a C++ executable. Therefore, as of this writing, debugging is difficult. Whereas in managed languages such as Java and C# you can decompile the deployment unit (class/jar for Java, dll/exe for C#) to take a look at the code and debug it if necessary, in a unikernel things are locked up tight. If anything goes wrong, there is no way to peek inside. The only way to get a sense of what is going on, or what’s gone wrong, is to read the logs. So the question becomes: how do you log from within a unikernel?

The answer is: carefully. I know, it’s a smartass answer, but it’s an appropriate one. Unikernel technology is still emerging. There is a lot of activity around unikernel loggers, but as of this writing no off-the-shelf technology on the order of Log4J, Log4Net or NLog has come forward. However, there are some people working in the space, and I had the privilege to talk to one of them, Adam Wick, Ph.D.

Adam is Research Lead at Galois, Inc. The folks at Adam’s company solve big problems for organizations such as DARPA and the Department of Homeland Security. Galois does work that is strategic and complex.

Adam has been working with unikernels for a while and shared some wisdom with me. The first thing Adam told me is that they stick to the Separation of Concerns mantra that I described above. The unikernels they implement do only one thing, and when it comes to logging they write loggers that do no more than send log entries to a log service. All their loggers do is create log data. Storing log data and analysis is done by a third-party service. (Logentries is one of the logging services that Adam tests his logger against.) (Please see Figure 4, below.)

[Figure 4: Unikernels emitting log entries to a third-party logging service]

In terms of creating the logger for the unikernel, Adam acknowledges that at the moment, it’s still pretty much a roll your own affair. Nonetheless, Adam recommends that no matter what logger you create, you will do well to support the syslog protocol. To quote Adam,

“syslog is a tremendously easy protocol to implement. You basically just need to prepare a simple string in the right format and send it to the right place. Not a lot of tricky bits.”

The syslog protocol defines a log entry format that supports a predefined set of fields. The protocol has gone through a revision from RFC 3164 to RFC 5424, and there are some alternative versions out there, so there is a bit of confusion around the implementation. Still, there is a usual set of fields that you can plan to support when implementing log emission. Table 1 below describes these usual fields.

Table 1: The fields of the Syslog protocol

| Field     | Description                                                                                                    | Example                                                                  |
|-----------|----------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------|
| Facility  | Refers to the source of the message, such as a hardware device, a protocol, or a module of the system software   | Log Alert, Line Printer subsystem                                        |
| Severity  | The usual severity tags                                                                                          | Debug, Informational, Notice, Warning, Error, Critical, Alert, Emergency |
| Hostname  | The host generating the entry                                                                                    | 198.162.0.1, MyHostName                                                  |
| Timestamp | The local time, in MMM DD HH:MM:SS format                                                                        |                                                                          |
| Message   | A free-form message that provides additional information relevant to the log entry                               | “I am starting up now.”                                                  |
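To make the fields concrete, here is a minimal sketch in Python of building a BSD-syslog (RFC 3164) style line. The PRI calculation (facility * 8 + severity) and the severity codes follow the protocol; the formatting shortcuts (e.g., a zero-padded day instead of the space-padded day the RFC specifies) are simplifications:

```python
import time

# Severity codes as defined by the syslog protocol.
SEVERITY = {"emergency": 0, "alert": 1, "critical": 2, "error": 3,
            "warning": 4, "notice": 5, "informational": 6, "debug": 7}

def syslog_line(facility, severity, hostname, message, ts=None):
    """Build a '<PRI>TIMESTAMP HOSTNAME MESSAGE' line.
    PRI packs facility and severity into one number: facility * 8 + severity."""
    pri = facility * 8 + SEVERITY[severity]
    stamp = time.strftime("%b %d %H:%M:%S",
                          time.localtime(time.time() if ts is None else ts))
    return "<%d>%s %s %s" % (pri, stamp, hostname, message)
```

Sending the resulting string over UDP or TCP to the log service's endpoint is all that remains; as Adam says, there are not a lot of tricky bits.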

Once you get logging implemented in your unikernel, the question becomes what to log. Of course you’ll log typical events that are important to the service your unikernel represents. For example, if your microservice does image manipulation, you’ll want to log when and how the images are processed. If your unikernel is providing data analysis, you’ll want to log the what and when of events relevant to the analysis.

However, remember that your unikernel will be one of thousands sitting in a single computer. More is required. To quote Adam Wick again:

“… the problem that we run into is not logging enough information.

I suspect this is true of many systems — in fact, I know I’ve seen at least a few conference talks on it — but understanding complex distributed systems is just a very hard problem. The more information you have to understand why something failed, or why something bogged down, the better. For example, in some of our early work, we didn’t log a very simple message: a “hello, I’m booted and awake!” message. This turned out to be a problem, later, because we were seeing some weird behavior, and it wasn’t clear if it was because of some sort of deep logic problem, or if it was just because our DHCP server was slow.

In hindsight this is probably obvious, but having information like ‘I’m up and my IP configuration is … ‘ and  ‘I’m about to crash.’ have proved to be very useful in a couple different situations. Most high-level programming languages have a mechanism to catch exceptions that occur. Catching and sending them out on a log seems obvious, but we did have to remember to make it a practice.”

Another area that Adam covered is ensuring storage availability for your log data. Remember, your unikernel will not store data, only emit it. Thus, as we’ve mentioned, we need to rely upon a service to store the data for later analysis. Logentries allows you to store your data for days or months so you can quickly analyze it and create alerts on your unikernel events. If you need to store the data for longer, you can use Logentries’ S3 archiving features to maintain a backup of your log data for years (or indefinitely). And not only does Logentries provide virtually unlimited storage for an unlimited amount of time, but the service is highly available. It’s always there at the IP endpoint. Internally, if a node within Logentries goes down, the system is designed to transfer traffic to another node immediately without any impact to the service consumer. High availability of the logging service is critical. Remember, the logging service is all your unikernel has to store log data.

Given the closed, compiled nature of unikernels, logging within a unikernel is an important practice. In fact, it’s a ripe opportunity for an entrepreneurial developer. It’s not unreasonable to think that as the technology spreads, someone will publish an easy-to-use unikernel logger. Who knows, that someone might be you.

Putting it all together

Unikernels are going to be with us for a while. They are small and load fast, making them a good choice for implementing transient microservices. And because they are small, you can load thousands into a machine. Each unikernel has a unique IP address, which means you can cluster them to a common purpose behind a load balancer. The IP address also allows you to use standard communication protocols to have the unikernels talk to each other.

But for all the activity going on in the space, unikernels are still an emerging technology. The toolset to create unikernels is growing, and players such as Docker and Xen are active in unikernel development.

Given that a unikernel is a compiled binary that is nearly impossible to debug, logging information out of your unikernel implementation becomes critical. However, unikernel loggers are still a roll-your-own undertaking, for now anyway. Those in the know suggest that when you make a logging client, you emit log data according to the syslog protocol and use logging services such as Logentries to store log data for later analysis.

As I’ve mentioned over and over, unikernels are going to be with us for a while. Those who understand the details of unikernel technology and have made a few mission critical microservices using unikernels will be greatly in demand in no time at all. Unikernels are indeed the next big thing and the next big opportunity.

(Via Blog.logentries.com)

Design Of A Modern Cache

Caching is a common approach for improving performance, yet most implementations use strictly classical techniques. In this article we will explore the modern methods used by Caffeine, an open-source Java caching library, that yield high hit rates and excellent concurrency. These ideas can be translated to your favorite language and hopefully some readers will be inspired to do just that.

Eviction Policy

A cache’s eviction policy tries to predict which entries are most likely to be used again in the near future, thereby maximizing the hit ratio. The Least Recently Used (LRU) policy is perhaps the most popular due to its simplicity, good runtime performance, and decent hit rate in common workloads. Its ability to predict the future is limited to the history of the entries residing in the cache: it gives the last access the highest priority by guessing that the entry is the most likely to be reused soon.

Modern caches extend the usage history to include the recent past and give preference to entries based on recency and frequency. One approach for retaining history is to use a popularity sketch (a compact, probabilistic data structure) to identify the “heavy hitters” in a large stream of events. Take for example the Count-Min Sketch, which uses a matrix of counters and multiple hash functions. Adding an entry increments one counter in each row, and the frequency is estimated by taking the minimum value observed. This approach lets us trade off between space, efficiency, and the error rate due to collisions by adjusting the matrix’s width and depth.
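As a rough illustration of the idea (Caffeine itself is Java; this is a hedged Python sketch with arbitrary width and depth, not Caffeine’s implementation):

```python
import hashlib

class CountMinSketch:
    """Count-Min Sketch: width trades off collision error, depth confidence."""
    def __init__(self, width=256, depth=4):
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _index(self, key, row):
        # One independent-ish hash per row, derived by salting with the row.
        h = hashlib.blake2b(f"{row}:{key}".encode()).digest()
        return int.from_bytes(h[:8], "big") % self.width

    def add(self, key):
        for row in range(self.depth):
            self.rows[row][self._index(key, row)] += 1

    def estimate(self, key):
        # The minimum across rows bounds the over-count from collisions.
        return min(self.rows[row][self._index(key, row)]
                   for row in range(self.depth))

sketch = CountMinSketch()
for _ in range(5):
    sketch.add("hot-key")
sketch.add("cold-key")
print(sketch.estimate("hot-key"))   # at least 5: the sketch never under-counts
```

Because every counter a key touches only ever increases, the estimate can over-state a frequency due to collisions but never under-state it; widening the matrix reduces the error at the cost of space.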

Window TinyLFU (W-TinyLFU) uses the sketch as a filter, admitting a new entry if it has a higher frequency than the entry that would have to be evicted to make room for it. Instead of filtering immediately, an admission window gives an entry a chance to build up its popularity. This avoids consecutive misses, especially in cases like sparse bursts where an entry may not be deemed suitable for long-term retention. To keep the history fresh, an aging process is performed periodically or incrementally to halve all of the counters.
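The admission decision itself is simple once frequencies are available. The sketch below is a hypothetical Python illustration in which a plain dictionary stands in for the popularity sketch, with the halving-based aging applied once a sample-size threshold is reached:

```python
class FrequencyFilter:
    """Hypothetical TinyLFU-style admission filter. A plain dict stands in
    for the popularity sketch; counters are halved once the sample is full."""
    def __init__(self, sample_size=100):
        self.counts = {}
        self.events = 0
        self.sample_size = sample_size

    def record(self, key):
        self.counts[key] = self.counts.get(key, 0) + 1
        self.events += 1
        if self.events >= self.sample_size:   # aging: keep the history fresh
            self.counts = {k: v // 2 for k, v in self.counts.items()}
            self.events //= 2

    def admit(self, candidate, victim):
        # Admit only if the newcomer is historically hotter than the victim.
        return self.counts.get(candidate, 0) > self.counts.get(victim, 0)

f = FrequencyFilter()
for _ in range(3):
    f.record("hot")
f.record("one-hit-wonder")
print(f.admit("one-hit-wonder", "hot"))  # False: the one-off key is rejected
```

The effect is that a burst of one-off keys (a table scan, say) cannot push out entries with proven reuse, which is exactly the failure mode of plain LRU.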

W-TinyLFU uses the Segmented LRU (SLRU) policy for long term retention. An entry starts in the probationary segment and on a subsequent access it is promoted to the protected segment (capped at 80% capacity). When the protected segment is full it evicts into the probationary segment, which may trigger a probationary entry to be discarded. This ensures that entries with a small reuse interval (the hottest) are retained and those that are less often reused (the coldest) become eligible for eviction.
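The probation/protected mechanics can be sketched in a few lines of Python (an illustration of the SLRU idea, not Caffeine’s code; the 80% cap is taken from the text):

```python
from collections import OrderedDict

class SegmentedLRU:
    """SLRU sketch: new entries enter probation; a second access promotes
    them to protected (capped at 80% of capacity); overflow demotes back."""
    def __init__(self, capacity=10):
        self.capacity = capacity
        self.protected_cap = int(capacity * 0.8)
        self.probation = OrderedDict()   # cold entries: evicted first
        self.protected = OrderedDict()   # hot entries: reuse proven

    def access(self, key):
        if key in self.protected:
            self.protected.move_to_end(key)          # refresh recency
        elif key in self.probation:
            del self.probation[key]
            self.protected[key] = True               # promote on reuse
            if len(self.protected) > self.protected_cap:
                demoted, _ = self.protected.popitem(last=False)
                self.probation[demoted] = True       # demote the coldest hot
        else:
            self.probation[key] = True               # first touch: probation
        while len(self.probation) + len(self.protected) > self.capacity:
            self.probation.popitem(last=False)       # evict coldest overall

cache = SegmentedLRU(capacity=4)
for k in ["a", "b", "a", "c", "d", "e"]:
    cache.access(k)
print("a" in cache.protected)  # True: "a" was reused, so it survives
```

The final print is True because “a” was accessed twice; one-hit entries such as “b” never leave probation and are the first to be discarded.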

[Figure: hit rates on a database trace]

[Figure: hit rates on a search trace]

As the database and search traces show, there is a lot of opportunity to improve upon LRU by taking into account recency and frequency. More advanced policies such as ARC, LIRS, and W-TinyLFU narrow the gap to provide a near optimal hit rate. For additional workloads see the research papers and try our simulator if you have your own traces to experiment with.

Expiration Policy

Expiration is often implemented with a variable duration per entry, with expired entries evicted lazily when capacity demands it. This pollutes the cache with dead items, so sometimes a scavenger thread is used to periodically sweep the cache and reclaim space. This strategy tends to work better than ordering entries by their expiration time in an O(lg n) priority queue, because it hides the cost from the user instead of incurring a penalty on every read or write operation.

Caffeine takes a different approach by observing that a fixed duration is most often preferred. This constraint allows entries to be organized in O(1) time-ordered queues: a time-to-live duration corresponds to a write-order queue, and a time-to-idle duration to an access-order queue. The cache can reuse the eviction policy’s queues and the concurrency mechanism described below, so that expired entries are discarded during the cache’s maintenance phase.
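The key observation can be sketched in Python: with a fixed duration, a plain insertion-ordered map already is the expiration queue, so eviction never needs a priority queue. (Illustrative only; the fake clock is an assumption used to make the demo deterministic.)

```python
import time
from collections import OrderedDict

class FixedTtlQueue:
    """With a fixed TTL, insertion order IS expiration order, so the oldest
    entry is always at the head and each expiry check is O(1)."""
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self.entries = OrderedDict()        # key -> write timestamp

    def put(self, key):
        self.entries.pop(key, None)         # a rewrite moves the key to the tail
        self.entries[key] = self.clock()

    def expire(self):
        """Drop expired heads; stop at the first live entry (time order)."""
        now = self.clock()
        expired = []
        while self.entries:
            key, written = next(iter(self.entries.items()))
            if now - written < self.ttl:
                break
            del self.entries[key]
            expired.append(key)
        return expired

t = [0.0]                                   # controllable fake clock
q = FixedTtlQueue(ttl_seconds=10, clock=lambda: t[0])
q.put("a"); t[0] = 5; q.put("b"); t[0] = 12
print(q.expire())  # ['a'] -- "b" is only 7s old and stays
```

A time-to-idle queue works the same way, except that reads also move the entry back to the tail.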

Concurrency

Concurrent access to a cache is viewed as a difficult problem because in most policies every access is a write to some shared state. The traditional solution is to guard the cache with a single lock. This might then be improved through lock striping, splitting the cache into many smaller independent regions. Unfortunately, that tends to have limited benefit because hot entries cause some locks to be more contended than others. When contention becomes a bottleneck, the next classic step has been to update only per-entry metadata and use either a random-sampling or a FIFO-based eviction policy. Those techniques can have great read performance, but poor write performance and difficulty in choosing a good victim.

An alternative is to borrow an idea from database theory where writes are scaled by using a commit log. Instead of mutating the data structures immediately, the updates are written to a log and replayed in asynchronous batches. This same idea can be applied to a cache by performing the hash table operation, recording the operation to a buffer, and scheduling the replay activity against the policy when deemed necessary. The policy is still guarded by a lock, or a try lock to be more precise, but shifts contention onto appending to the log buffers instead.

In Caffeine separate buffers are used for cache reads and writes. An access is recorded into a striped ring buffer, where the stripe is chosen by a thread-specific hash and the number of stripes grows when contention is detected. When a ring buffer is full, an asynchronous drain is scheduled and subsequent additions to that buffer are discarded until space becomes available. When the access is not recorded due to a full buffer, the cached value is still returned to the caller. The loss of policy information does not have a meaningful impact because W-TinyLFU is able to identify the hot entries that we wish to retain. By using a thread-specific hash instead of the key’s hash, the cache prevents popular entries from causing contention by spreading the load out more evenly.
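A toy Python version of such a lossy, striped read buffer (the stripe count, sizes, and list-backed ring are simplifications; Caffeine’s real buffers are lock-free ring buffers):

```python
import threading

class LossyReadBuffer:
    """Striped, lossy read buffer: when a stripe is full the access record
    is dropped (the cached value is still served), and a drain later
    replays the recorded accesses against the eviction policy."""
    def __init__(self, stripes=4, stripe_size=16):
        self.stripes = [[] for _ in range(stripes)]
        self.stripe_size = stripe_size
        self.locks = [threading.Lock() for _ in range(stripes)]

    def record(self, key):
        # Stripe by thread id, not by key hash, so one hot key cannot
        # concentrate all contention on a single stripe.
        i = hash(threading.get_ident()) % len(self.stripes)
        with self.locks[i]:
            if len(self.stripes[i]) < self.stripe_size:
                self.stripes[i].append(key)
                return True
            return False          # full: drop the record, keep serving

    def drain(self, replay):
        for i, lock in enumerate(self.locks):
            with lock:
                batch, self.stripes[i] = self.stripes[i], []
            for key in batch:
                replay(key)        # e.g. bump the policy's frequency sketch

buf = LossyReadBuffer(stripes=1, stripe_size=2)
results = [buf.record(k) for k in ["a", "b", "c"]]
print(results)   # [True, True, False] -- the third access is dropped
seen = []
buf.drain(seen.append)
print(seen)      # ['a', 'b']
```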

In the case of a write a more traditional concurrent queue is used and every change schedules an immediate drain. While data loss is unacceptable, there are still ways to optimize the write buffer. Both types of buffers are written to by multiple threads but only consumed by a single one at a given time. This multiple producer / single consumer behavior allows for simpler, more efficient algorithms to be employed.

The buffers and fine grained writes introduce a race condition where operations for an entry may be recorded out of order. An insertion, read, update, and removal can be replayed in any order and if improperly handled the policy could retain dangling references. The solution to this is a state machine defining the lifecycle of an entry.
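A stripped-down Python illustration of the idea (the alive/retired/dead state names follow descriptions of Caffeine’s entry lifecycle; the event names here are assumptions): replayed events that are invalid for an entry’s current state are simply ignored rather than corrupting the policy.

```python
from enum import Enum, auto

class EntryState(Enum):
    ALIVE = auto()     # present in the hash table and in the policy
    RETIRED = auto()   # removed from the hash table, still in the policy
    DEAD = auto()      # removed from both; further replayed events ignored

# Illustrative transition table: only the listed (state, event) pairs
# change state, so out-of-order replays cannot leave dangling references.
TRANSITIONS = {
    (EntryState.ALIVE, "remove"): EntryState.RETIRED,
    (EntryState.RETIRED, "drain"): EntryState.DEAD,
}

def apply(state, event):
    return TRANSITIONS.get((state, event), state)  # drop invalid events

s = EntryState.ALIVE
s = apply(s, "drain")    # out of order: ignored, entry stays ALIVE
s = apply(s, "remove")   # ALIVE -> RETIRED
s = apply(s, "drain")    # RETIRED -> DEAD
print(s)  # EntryState.DEAD
```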

In benchmarks the cost of the buffers is relatively cheap and scales with the underlying hash table. Reads scale linearly with the number of CPUs at about 33% of the hash table’s throughput. Writes have a 10% penalty, but only because contention when updating the hash table is the dominant cost.

Conclusion

There are many pragmatic topics that have not been covered. These include tricks to minimize the memory overhead, testing techniques to retain quality as complexity grows, and ways to analyze performance to determine whether an optimization is worthwhile. These are areas that practitioners must keep an eye on, because once neglected it can be difficult to restore confidence in one’s own ability to manage the ensuing complexity.

The design and implementation of Caffeine is the result of numerous insights and the hard work of many contributors. Its evolution over the years wouldn’t have been possible without the help from the following people: Charles Fry, Adam Zell, Gil Einziger, Roy Friedman, Kevin Bourrillion, Bob Lee, Doug Lea, Josh Bloch, Bob Lane, Nitsan Wakart, Thomas Müeller, Dominic Tootell, Louis Wasserman, and Vladimir Blagojevic.

(via HighScalability.com)

ejabberd Massive Scalability: 1 Node — 2+ Million Concurrent Users

How Far Can You Push ejabberd?

From our experience, we all get the idea that ejabberd is massively scalable. However, we wanted to provide benchmark results and hard figures to demonstrate our outstanding performance level and give a baseline about what to expect in simple cases.

That’s how we ended up with the challenge of fitting a very large number of concurrent users on a single ejabberd node.

It turns out you can get very far with ejabberd.

Scenario and Platforms

Here is our benchmark scenario: the target was to reach 2,000,000 concurrent users, each with 18 contacts on their roster and a session lasting around 1 hour. The scenario involves 2.2M registered users, so almost all contacts are online at peak load. This means that presence packets were broadcast for those users, so there was some traffic in addition to the packets handling user connections and managing sessions. In that situation, the scenario produced 550 connections per second and thus 550 logins per second.
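These figures are internally consistent, as a quick arithmetic check shows: at 550 logins per second, the ramp-up to 2 million users takes almost exactly the scenario’s one-hour session length.

```python
# Arithmetic check of the scenario's numbers (no ejabberd involved).
logins_per_second = 550
target_users = 2_000_000
ramp_seconds = target_users / logins_per_second
print(f"{ramp_seconds / 3600:.2f} hours")  # → 1.01 hours
```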

The database for authentication and roster storage was MySQL, running on the same node as ejabberd.

For the benchmark itself, we used Tsung, a tool dedicated to generating large loads to test server performance. We used a single large instance to generate the load.

Both ejabberd and the test platform were running on Amazon EC2 instances. ejabberd was running on a single node of instance type m4.10xlarge (40 vCPU, 160 GiB). Tsung instance was identical.

Regarding ejabberd software itself, the test was made with ejabberd Community Server version 16.01. This is the standard open source version that is widely available and widely used across the world.

The connections were not using TLS to make sure we were focusing on testing ejabberd itself and not openSSL performance.

Code snippets and comments regarding the Tsung scenario are available for download: tsung_snippets.md

Overall Benchmark Results

We managed to surpass the target: we support more than 2 million concurrent users on a single ejabberd node.

For XMPP servers, the main limitation to handle a massive number of online users is usually memory consumption. With proper tuning, we managed to handle the traffic with a memory footprint of 28KB per online user.
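A quick check of what that footprint implies at peak (arithmetic only):

```python
# 28 KB per online user at the 2M-user peak, versus the instance's RAM.
per_user_kb = 28
users = 2_000_000
total_gib = per_user_kb * users / (1024 * 1024)   # KB -> GiB
print(f"{total_gib:.1f} GiB")  # → 53.4 GiB, well within the 160 GiB instance
```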

The 40 CPUs were almost evenly used, with the exception of the first core, which was handling all the network interrupts. It was more loaded by the operating system and thus less loaded by the Erlang VM.

In the process, we also optimized our XML parser, now released as Fast XML, a high-performance, memory-efficient Expat-based XML parser for Erlang and Elixir.

Detailed Results

ejabberd Performance

[Figure: ejabberd performance]

The benchmark shows that we reached 2 million concurrent users after one hour. We were logging in about 33k users per minute, producing session traffic of a bit more than 210k XMPP packets per minute (this includes the stanzas for SASL authentication, binding, roster retrieval, etc.). The maximum number of concurrent users is reached shortly after the 2 million mark, by design in the scenario. At that point we still connect new users, but as the first users start disconnecting, the number of concurrent users stabilizes.

As we try to reproduce common client behavior, we set up Tsung to send “keepalive pings” on the connections. Since each session sends one such whitespace ping per minute, the number of these requests grows proportionally with the number of connected users. And while idle connections consume few resources on the server, it is important to note that at this scale they start to be noticeable: once you have 2M users, you will be handling 33K of these pings per second just from idle connections. They are not represented on the graphs, but they are an important part of the real-life traffic we were generating.
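The 33K figure follows directly from the scenario’s numbers:

```python
# One whitespace ping per session per minute, 2M concurrent sessions.
sessions = 2_000_000
pings_per_second = sessions / 60
print(f"{pings_per_second:.0f} pings/second")  # → 33333 pings/second
```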

ejabberd Health

[Figure: ejabberd health]

At all times, ejabberd’s health was fine. Typically, when ejabberd is overloaded, TCP connection establishment and authentication times tend to grow to unacceptable levels. In our case, both of those operations performed very fast throughout the benchmark, in under 10 milliseconds. There were almost no errors (the rare occurrences are artefacts of the benchmark process).

Platform Behavior

[Figure: platform behavior]

Good health and performance are confirmed by the state of the platform. CPU and memory consumption were totally under control, as shown in the graph. CPU consumption stays far from system limits. Memory grows proportionally to the number of concurrent users.

We also need to mention that the CPU values are slightly overestimated as seen by the OS, since Erlang schedulers do a bit of busy waiting when they run out of work.

Challenge: The Hardest Part

The hardest part was definitely tuning the Linux system for ejabberd and for the benchmark tool, to overcome the default limitations. By default, Linux servers are not configured to let you handle, or even generate, 2 million TCP sockets. It required quite a bit of network setup to avoid running out of ports on the Tsung side.

On a similar topic, we worked with the Amazon server team, as we had been pushing the limits of their infrastructure like no one before. For example, we had to use a second Ethernet adapter with multiple IP addresses (2 × 15 IPs, spread across 2 NICs). It also helped a lot to use the latest Enhanced Networking drivers from Intel.

All in all, it was a very interesting process that helped make progress on Amazon Web Services by testing and tuning the platform itself.

What’s Next?

This benchmark was intended to demonstrate that ejabberd can scale to a large volume and serve as a baseline reference for more complex and full-featured platforms.

Next step is to keep on going with our benchmark and optimization iteration work. Our next target is to benchmark Multi-User Chat performance and Pubsub performance. The goal is to find the limits, optimize and demonstrate that massive internet scale can be done with these ejabberd components as well.

(via Blog.process-one.net)