Piecing together Rust: It is more than just writing code


Let’s start with a fact we all know: learning new things is hard. It pushes you out of your comfort zone, bombards you with a lot of new concepts at once, and sends your imposter syndrome through the roof.

This talk aims to cover the important, widely used concepts and tools needed to get started with Rust: the what (definitions, usage), the why (concepts), and the how (internal workings, integrations) of these tools, along with pointers to resources for learning more. This way the getting-started problem is solved, and beginners can feel comfortable digging deeper.

Presented by

  • Tarun Pothulapati

    Tarun Pothulapati is an engineer at Buoyant working on Linkerd, an open source, CNCF-incubated service mesh project. He also spends his time contributing to other OSS projects such as Service Mesh Interface (SMI). Recently he has been very interested in Rust and networking, and he tries not only to build fun projects but also to contribute to OSS in the Rust ecosystem, such as tracing.

  • Resources

    Recordings

    Transcript

    Piecing together Rust: It is more than just writing code

    Bard:
    Tarun helpfully teaches Rust
    to newbies who start out and must
    learn the concepts, the tools
    and the various rules
    until their own experience they trust

    Tarun:
    Hello, everyone, my name is Tarun, and in today's talk we will cover the basic principles and tools to get started with Rust without having to do it the hard way. First, let me introduce myself. I am an engineer at Buoyant, the makers of Linkerd. Previously I was an intern at CNCF. I work in Go in my current job, but our proxy is written in Rust and we use Rust for other things as well. I also try to contribute to Rust projects like tracing. Once COVID started I thought I would share my learnings so that things would be easier for other folks. This talk is in three stages.

    Let's first look at installation. Rust has a great installation experience. Once you get the tool, you can use it in multiple ways. rustup is a toolchain multiplexer: it installs and manages multiple Rust toolchains. A toolchain packages together all the tools that make up the Rust programming language, and you can have multiple variants installed. One way to pick a variant is by release channel. Rust has three release channels: stable, beta, and nightly. A stable release happens every six weeks, a beta release precedes every stable release, and nightly is built daily. We will see an example. Here I am using rustup toolchain install to get a specific version. That installs it locally, but it is still not the toolchain that runs when you invoke Cargo or any other Rust tool; you use the rustup default command to make it the default. Whenever you run Cargo, rustup chooses the default toolchain for you; that is the multiplexing part.
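    As a sketch, the rustup commands for this flow look like the following (the channel used here, nightly, is just an illustration):

        rustup toolchain install nightly   # install an additional toolchain
        rustup toolchain list              # list the installed toolchains
        rustup default nightly             # make it the default for cargo, rustc, and friends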

    Now let's talk about the tooling around compilation, starting with formatting and linting. For formatting, Rust provides rustfmt, a tool that styles your Rust code according to the official Rust guidelines. Whenever you run cargo fmt, it runs rustfmt internally and rewrites all of your code to follow the same guidelines. This is useful because the whole project stays in one format, so developers can easily read and understand it without having to figure out the style first. Next, Rust has a tool called rust-clippy which finds common mistakes and suggests improvements. It has over 400 lints included, and these are pretty awesome; I highly suggest having clippy in your CI and at least in your development workflow so you can catch common bugs and easy improvements. Here I have an example where a variable is set to 0 and the loop condition depends on it, but the variable is never updated inside the loop, so it is essentially an infinite loop. The code compiles and runs, but if you run cargo clippy it will tell you that the variable the condition depends on is never changed. Clippy finds much better lints than this one too; I highly recommend checking it out.
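    A minimal sketch of that loop (the variable name and bound are placeholders of mine) compiles fine but is flagged by clippy's while_immutable_condition lint:

        fn main() {
            let i = 0;
            // The condition reads `i`, but nothing in the body ever changes it,
            // so this loop can never terminate. cargo build accepts it;
            // cargo clippy flags the immutable condition.
            while i < 10 {
                println!("still waiting...");
            }
        }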

    Next we will talk about the IDE experience, which is very important because it makes a developer's job easier. We have rust-analyzer, which is a relatively new tool. It is an open source implementation of the Language Server Protocol; the idea is that many editors and IDEs can use one common language server, so language support doesn't have to be built separately for each editor or IDE. All the editors talk to the same language server over the protocol, one language server can be used with many IDEs, and through it you get all the standard IDE features, which is pretty awesome. Previously there was a tool in Rust doing the same job, the RLS, but it was slow, especially for bigger projects: it ran the full Cargo compilation on everything, which is pretty heavy, and then read the JSON output to answer questions. As you know, for bigger projects that does not scale. rust-analyzer is more dynamic and performs analysis only on the code it needs, so it is pretty fast that way.

    Next we will talk about documentation. Rust documentation is one of my favorite parts, because there is a standard, enforced way to write documentation, so it is consistent across many libraries. I think having this standardized tooling made it very easy to add documentation, and that is why we see a lot more docs in the Rust ecosystem. We have a tool called rustdoc. All your documentation is annotated on top of the code, so you don't have to keep it separate, and using rustdoc you can generate a site with a UI for all of that data. Triple slashes are the syntax for doc comments.

    Here we have a struct with two functions. Each item is annotated with three slashes. The struct has a comment saying it is a human being, and inside the impl for that Person type we have new and hello. For the new function we have not just an explanation but also sections for arguments and examples. All these comments are Markdown, which makes them easy to write. Once you have code with these annotations and you run cargo doc, you get this output: the final HTML site that cargo doc generates, with all of the type information, and the comments converted into documentation, which is pretty awesome to look at and pretty useful. Documentation is awesome.
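    Reconstructed from that description, the annotated code looks roughly like this (the exact comment wording, and the crate name mylib used in the doctest, are placeholders):

        /// A human being.
        pub struct Person {
            name: String,
        }

        impl Person {
            /// Creates a new `Person` with the given name.
            ///
            /// # Arguments
            ///
            /// * `name` - the name of the person
            ///
            /// # Examples
            ///
            /// ```
            /// use mylib::Person;
            /// let p = Person::new(String::from("Ferris"));
            /// ```
            pub fn new(name: String) -> Person {
                Person { name }
            }

            /// Greets the person by name.
            pub fn hello(&self) {
                println!("Hello, {}!", self.name);
            }
        }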

    On compilation itself there is not much more to say: you run cargo build, and the binaries end up in the target folder, under release if you do a release build. Next we will talk about testing. In Rust you have two ways of writing tests: unit tests and integration tests. Unit tests live next to the code they test, in the same files, while integration tests go into the tests directory, precisely because you want to exercise your code the way an external binary or library user would; keeping them in the tests folder helps with that. Every test function is annotated with the #[test] attribute, so whenever you run cargo test it recognizes the test functions, compiles them into a test binary, runs them, and outputs the results. So, for example, here we have a module with a function called it_works. The module is marked with #[cfg(test)], so if we are not building tests it is not compiled at all, which is pretty good. We have one test and the test passes: whenever we run cargo test, that test is run and the result is reported. There are also arguments to run only specific tests.
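    The module just described is essentially the template that cargo new --lib generates; a minimal version looks like this:

        #[cfg(test)]
        mod tests {
            // This whole module is compiled only when building tests.
            #[test]
            fn it_works() {
                assert_eq!(2 + 2, 4);
            }
        }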

    Next is package management, and in Rust package management means Cargo, which is a pretty full-fledged tool; we have already been using it for various things. First we will talk about dependency management. Cargo is more than a dependency manager, but what is Cargo? Cargo allows you to manage dependencies and have repeatable builds. It does this with two files: Cargo.toml, the developer-facing part where you describe your package and its dependencies, and Cargo.lock, which Cargo maintains to pin exact versions and keep track of the state of the project. Cargo also introduces a standard package layout that projects follow, and it acts as an umbrella for most operations, like testing and docs. We will see an example. We have a Cargo.toml here for an example package. Whenever we run cargo build, these dependencies are fetched and linked with the library so our package can use them. Next we will talk about workspaces. As we saw previously, there are use-cases with multiple packages; you may have divided your library into multiple crates as it grows. A workspace allows you to group those packages. This is done with a root Cargo.toml that declares the workspace and its members, and the members share a common lock file and output directory. Here, for example, we have a workspace including all the tracing crates, and members can be binary or library crates.
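    As a sketch (the package name, version, and dependency below are made up for illustration; the workspace members follow the tracing example from the talk), the two kinds of manifest look like this:

        # Cargo.toml of a single package
        [package]
        name = "example-package"
        version = "0.1.0"
        edition = "2018"

        [dependencies]
        serde = "1.0"

        # Cargo.toml at the root of a workspace
        [workspace]
        members = ["tracing", "tracing-core", "tracing-subscriber"]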

    Next we will talk about features. One of my favorite things in Rust is Cargo's feature flags, which essentially let you affect the compilation. For example, say I am the owner of a library and I want to offer people a variant of my library that does not depend on the standard library. This is important to me because I don't want the users of my library to be forced into a dependency just because I took one; I want to offer a variant of my library without that dependency. This is useful for embedded systems, where libraries offer multiple variants, for example a variant with no dependency on printing and so on. The important thing to note is that a feature of a package is either an optional dependency or a set of other features. Now we will see an example, taken from the tracing crate. In the dependencies section we can see that we take a dependency on lazy_static, and it is optional. In the features section we have two features, alloc and std. The std feature pulls in lazy_static, which itself relies on the standard library; the alloc feature does not take that dependency at all. You might ask: how is the crate able to offer two variants, and how is the code actually separated? This is done using the cfg attribute. We have two implementations of an inner module: the first is annotated so that it is only included when the std feature is enabled, and the second only when std is not enabled, so you get one version with the dependency and one without it. Consumers of this library then pick the feature flags they want. In their binary crate they depend on the tracing-core package, disable the default features, because they don't want to opt into std, and enable the alloc feature: they want the variant of the library that has no dependency on lazy_static. This is awesome because it allows you to have multiple variants of a package to support multiple use-cases and systems.
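    A simplified sketch of the pattern (not the exact tracing-core source; version numbers are illustrative, and the crate and feature names follow the talk's example): the manifest declares the optional dependency and the features, the code selects an implementation with cfg, and the consumer opts in from their own Cargo.toml.

        # Library's Cargo.toml
        [dependencies]
        lazy_static = { version = "1", optional = true }

        [features]
        default = ["std"]
        alloc = []
        std = ["lazy_static"]

        // Library's lib.rs
        #[cfg(feature = "std")]
        mod imp {
            // implementation that may use std and lazy_static
        }

        #[cfg(not(feature = "std"))]
        mod imp {
            // fallback implementation with no extra dependencies
        }

        # Consumer's Cargo.toml
        [dependencies]
        tracing-core = { version = "0.1", default-features = false, features = ["alloc"] }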

    Next we will talk about binary management. The Cargo tool is also very extensible in itself: you can use Cargo to run external binary tools. Whenever you run Cargo with a subcommand it does not recognize, say cargo expand, it looks for a binary called cargo-expand on your path and runs it, meaning Cargo is extensible on its own. This is made easy by cargo install, which can be used to install such binaries; they go into the .cargo directory in your home folder. Here we install the cargo-expand binary. Then, whenever we run cargo expand, Cargo finds that binary and it is invoked. cargo-expand is an awesome tool that shows you what your macros expand to; it helped me a lot in learning Rust.
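    The two steps look like this (both are standard Cargo commands; ~/.cargo/bin is the default install location):

        cargo install cargo-expand   # builds and places the cargo-expand binary in ~/.cargo/bin
        cargo expand                 # Cargo finds cargo-expand on the PATH and invokes it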

    As you can see, the println! macro is expanded. Now let's talk about debugging. Under debugging we will first look at logging, then tracing, and then gdb. First, logging. In Rust there is a crate called log that provides a simple API with multiple log levels that you can use to emit events. It abstracts over the actual logging implementation: if your library uses the macros from the log crate, there is no default log implementation, so nothing is emitted on its own. The consumer of the library, which could be a binary, chooses the log implementation used by itself and by all the dependencies that use the log crate. Essentially, the consumer decides which log implementation to use, and the library just uses the API. The log crate itself is pretty small, and it provides a very simple API for writing your own log implementation too. Here is an example using the macros from the log crate, where we use the different levels to report what is happening; you can use these log levels to emit events in your libraries or binaries. The logging implementation is chosen in the main file: you can use the set_logger function to set the logger, and the logger is the implementation that all the macros send their events to. If the project is a library, you will not set this, and nothing is emitted, because you don't run a library on its own. Once that library is used as a dependency in a binary project, the binary uses that function to set the log implementation and everything gets wired up.
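    A minimal sketch of that split: the library-side code only talks to the log facade, and the binary installs a concrete logger. env_logger is just one common implementation picked for this example (it and log would need to be listed in Cargo.toml), and the module name client is a placeholder.

        // Library-side code: uses only the log facade, no concrete logger.
        mod client {
            use log::{info, warn};

            pub fn connect(addr: &str) {
                info!("connecting to {}", addr);
                warn!("no TLS configured for {}", addr);
            }
        }

        // Binary-side code: installs a concrete logger once, for everything.
        fn main() {
            env_logger::init();                // the consumer picks the implementation
            client::connect("127.0.0.1:8080");
        }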

    Here is an example of some logs that were instrumented using the log crate. It is a very simple set of logs. Next we will talk about tracing. As you saw, those logs are pretty simple and pretty hard to comprehend, because there is no contextual information. Tracing allows you to have contextual information. It is more than a logging library, but it provides the same simple API for consumers. It introduces a new primitive called a span: running a function, for example, can be a span, and spans can nest, so a function span can contain multiple sub-spans. This works well for distributed or asynchronous systems like Tokio. You don't have to change a lot: you replace the log macros with the tracing macros and everything should work, since they expose essentially the same API. As I mentioned, spans can have parent spans. Here we are instrumenting the connect_to function with the #[instrument] attribute: it automatically opens a span when the function is entered and closes it when the function returns, and the trace events happening inside the function are recorded inside that span. This produces more contextual logs like these: we have the load function with multiple requests and an unknown error, and the contextual data attached to each event tells you where in the lifecycle it happened, rather than leaving you with a single log message that is very hard to place.
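    A sketch of that instrumentation (the subscriber setup through tracing_subscriber::fmt is an addition of mine so the example is runnable; the talk only shows the attribute):

        use tracing::{info, instrument};

        #[instrument]                      // opens a span when the function is entered
        fn connect_to(addr: &str) {        // and closes it when the function returns
            info!("starting handshake");   // recorded as an event inside the span
        }

        fn main() {
            // A subscriber decides what happens to spans and events;
            // the fmt subscriber simply prints them.
            tracing_subscriber::fmt::init();
            connect_to("127.0.0.1:8080");
        }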

    Next we will talk about the debugger. gdb, the GNU project debugger, lets you see what is going on inside a program while it is executing and retrieve information from it. It supports multiple languages, and there is rust-gdb, a wrapper that gives you nicer, Rust-aware output. We have a similar program here that runs in a loop. We compile it in debug mode to keep the debug symbols; if you want to do the same, make sure debug symbols are enabled in your Cargo profile so they end up in your binary, and then run the binary under rust-gdb. Once you do that, gdb provides an interface to interact with the program. We set a breakpoint at line 8, then run the program using the r command. The program runs until line 8, breaks there, and we can print the variable: here it is 3. This gives you runtime visibility into what is happening, which is very useful when you are trying to find hard bugs at runtime. gdb provides various other features; it is a pretty standard project for debugging on Linux, and all these tools should work with Rust. Thank you. Now I will take questions.

    Moderator:
    OK. Thanks, Tarun, for quite the informative talk. It covers, I would say, a lot of areas and various topics, so it looks like it should be a good overview for newbies. It was informative to me as well. We are running out of time, but I have one question that comes from the chat: is there any front end for gdb that you like to use?

    Tarun:
    I don't work on very complex systems. I know there are a lot of front ends for gdb you can use, but I am not sure I can add anything here. One other thing I wanted to end my talk with: we covered a lot of things, so if you have not understood everything, please don't feel intimidated. There are good resources online, like the Cargo documentation and books for everything; feel free to check them out to make sure your understanding is correct. There are great resources out there for sure.

    Moderator:
    Yeah, for sure, we have a lot of good documentation online. OK. Thanks again for a good presentation. The next session is starting in 10 minutes. See you later. OK.

    Tarun:
    Thank you, everyone.