<h1><a href="https://by-cha.se/ruby-memoization.html">Ruby Memoization</a></h1>
<p><em>Chase Gilliam, by Chase: a blog about functional programming, and other items of interest. Published 2018-04-19, updated 2019-05-12.</em></p>
<h2>Memoization</h2>
<p>If you are unfamiliar with memoization, it is a technique for improving a program’s execution time <em>(and occasionally serving other purposes)</em> by storing the results of expensive function calls and returning the stored value when the function is called with the same inputs. It is a specific form of caching. The technique was first described in 1968 by Donald Michie in the paper <a href="https://www.cs.utexas.edu/users/hunt/research/hash-cons/hash-cons-papers/michie-memo-nature-1968.pdf"><em>Memo Functions and Machine Learning</em></a>. You can learn more from <a href="https://en.wikipedia.org/wiki/Memoization">Wikipedia</a>.</p>
<h3>Ruby’s Conditional Assignment Operator</h3>
<p>Ruby provides the <code>||=</code> operator, which is often called the conditional assignment operator, or the “or-equals sign”. It can be thought of as a simple memoization operator as well. Assuming <code>my_shirt ||= get_a_shirt_from_the_closet</code>, if the variable <code>my_shirt</code> points to an object that is “truthy”, then sending that object the message <code>||=</code> causes <code>my_shirt</code> to return itself. However, if <code>my_shirt</code> is “falsy”, then <code>||=</code> evaluates the method <code>get_a_shirt_from_the_closet</code> and assigns the result to the variable <code>my_shirt</code> in the present context.</p>
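<p>As a concrete sketch of the pattern (the <code>Wardrobe</code> class and its trip counter are hypothetical, and an instance variable stands in for <code>my_shirt</code> so the value survives between method calls):</p>

```ruby
class Wardrobe
  attr_reader :closet_trips

  def initialize
    @closet_trips = 0
  end

  # Memoized reader: the expensive lookup runs only on the first call;
  # afterwards @my_shirt is truthy, so ||= returns it unchanged.
  def my_shirt
    @my_shirt ||= get_a_shirt_from_the_closet
  end

  private

  def get_a_shirt_from_the_closet
    @closet_trips += 1 # stands in for an expensive lookup
    "flannel shirt"
  end
end
```

<p>No matter how many times <code>my_shirt</code> is called on an instance, only one trip to the closet is ever made.</p>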
<p>Note that <code>a ||= b</code> is not logically equivalent to <code>a = a || b</code>; it is more accurately described as <code>a || a = b</code>, as long as <code>a</code> is a bound variable. The <code>||=</code> operator may also be used to bind the result of the right-hand side to an unbound variable on the left-hand side.</p>
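<p>A quick demonstration of those semantics, including the <code>false</code> case that sometimes surprises people:</p>

```ruby
a = nil
a ||= 1   # nil is falsy, so the right-hand side is evaluated; a is now 1

b = false
b ||= 2   # false is also falsy, so b is reassigned; b is now 2

c = 0
c ||= 3   # 0 is truthy in Ruby, so c keeps its value; c is still 0
```

<p>Because <code>false</code> triggers reassignment just like <code>nil</code>, <code>||=</code> is a poor fit for memoizing methods that may legitimately return <code>false</code>.</p>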
<p>This works quite well when <code>get_a_shirt_from_the_closet</code> takes no arguments and is otherwise idempotent. If, however, you expect to pass arguments to the method, or to get different results at different times, based on the latest data from a database for example, then <code>||=</code> will keep returning stale data.</p>
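<p>For example, a hypothetical method that memoizes a timestamp with <code>||=</code> keeps returning its first reading forever:</p>

```ruby
class Clock
  # Anti-pattern: the first Time.now is cached for the life of the object,
  # so every later call returns the same stale value.
  def now
    @now ||= Time.now
  end
end
```

<p>Every call to <code>Clock#now</code> after the first returns the original <code>Time</code> object, no matter how much real time has passed.</p>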
<h3>Hash Based Memoization</h3>
<p>Another strategy for memoization in Ruby is to use a hash to store values, which is useful when you have expensive computations and want to pass arguments to the method that is performing the work. Consider the following code.</p>
<pre><code class="ruby">class Fib
  def initialize
    @answers = {}
  end

  def cached(n)
    @answers[n]
  end

  def fibonacci(n)
    return n if n <= 1
    @answers[n] ||= (fibonacci(n - 1) + fibonacci(n - 2))
  end
end
</code></pre>
<p>On initialization, the class creates an empty hash that will be used to store previously computed values. When <code>fibonacci(n)</code> is called on a <code>Fib</code> instance with <code>n</code> greater than the base cases of <code>0</code> and <code>1</code>, the method first checks <code>@answers[n]</code> to see if <code>fibonacci(n)</code> has already been computed. This is a perfectly good use of memoization, but this implementation is a recursive approach to calculating the Fibonacci number, so there’s an added twist. If <code>n</code> hasn’t been pre-calculated, but <code>n - 1</code> or <code>n - 2</code> have been, those recursive calls return memoized values. If neither is memoized, then the function keeps recursing until it hits a previously calculated value. This approach has the advantage of becoming faster over time, with the tradeoff that it requires O(n) space.</p>
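<p>Exercising the class (restated here so the snippet is self-contained) shows the memo hash filling in as a side effect of the recursion:</p>

```ruby
class Fib
  def initialize
    @answers = {}
  end

  def cached(n)
    @answers[n]
  end

  def fibonacci(n)
    return n if n <= 1
    @answers[n] ||= fibonacci(n - 1) + fibonacci(n - 2)
  end
end

fib = Fib.new
fib.fibonacci(10) # => 55
fib.cached(9)     # => 34, memoized as a side effect of the recursive calls
```

<p>A second call to <code>fibonacci(10)</code>, or any smaller argument, is now a single hash lookup.</p>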
<h3>A Word About Caching</h3>
<p>This type of memoization is a simple and unsophisticated approach to caching. If you want to persist data across time, servers, HTTP requests or something similar, you should be using a more robust caching solution. Rails now comes with a really nice <a href="http://guides.rubyonrails.org/caching_with_rails.html">built-in caching layer</a>, and there are a number of gems available for caching like <a href="https://github.com/redis-store/redis-store">Redis-Store</a>.</p>
<h3>In Closing</h3>
<p>If you need to store the result of an idempotent method that will get reused within a class, then consider using the <code>||=</code> operator. If you need to cache multiple values, then a hash may be appropriate. However, if you have more complicated needs, then use a purpose-built caching solution.</p>
<p>Further Reading:</p>
<p>Peter Cooper’s <a href="http://www.rubyinside.com/what-rubys-double-pipe-or-equals-really-does-5488.html"><em>What Ruby’s ||= (Double Pipe / Or Equals) Really Does</em></a></p>
<p>Justin Weiss’ <a href="https://www.justinweiss.com/articles/4-simple-memoization-patterns-in-ruby-and-one-gem/"><em>4 Simple Memoization Patterns in Ruby</em></a></p>
<p>David Fayram’s excellent <a href="http://dave.fayr.am/posts/2011-10-4-rubyists-already-use-monadic-patterns.html"><em>Rubyists Already Use Monadic Patterns</em></a></p>
<h1><a href="https://by-cha.se/protobuf-in-elixir-with-exprotobuf.html">Protobuf in Elixir with Exprotobuf</a></h1>
<p><em>Chase Gilliam. Published 2018-02-10, updated 2019-05-12.</em></p>
<h2>What is Protobuf</h2>
<p>Protobuf, or protocol buffers, is at its core a means of serializing structured data. Protocol buffers occupy a use case, passing structured data between systems, where XML was dominant in the past and where JSON is lacking. Compared to XML, protobuf is a much simpler standard, binary, an order of magnitude smaller, up to two orders of magnitude faster to serialize/deserialize, and <a href="https://developers.google.com/protocol-buffers/docs/overview#whynotxml">claims to have other benefits</a>. You might consider protobuf when sending things like structured logging data to other servers, if you are working with <a href="https://grpc.io/docs/">gRPC</a>, or if you want clients to be able to generate code to consume your API.</p>
<h2>Protobuf in Elixir</h2>
<p>There are a couple of options for working with protobuf in Elixir, with <a href="https://github.com/bitwalker/exprotobuf">exprotobuf</a> being the easiest to get started with, and <a href="https://github.com/tony612/protobuf-elixir">protobuf-elixir</a> being more full-featured and standards-compliant. I ended up choosing exprotobuf for this article because I got it working first and didn’t have to install the protobuf compiler. That said, I would probably use protobuf-elixir because it supports code generation and doesn’t rely on string templates. It also doesn’t rely on the Erlang library <a href="https://github.com/tomas-abrahamsson/gpb">gpb</a>, though I’m not sure how much I care about that. I’ll probably write a future version of the following guide targeting protobuf-elixir if there is sufficient interest.</p>
<h2>Getting Started</h2>
<p>The simple mix application for this guide provides an API for reading/creating <a href="https://en.wikipedia.org/wiki/Mega_Man">MegaMan</a> androids. It was a completely frivolous choice, but I didn’t want to track down a logging service that supports protobuf, and I really dislike code examples that use blog posts, comments, or address books. Those things seem to carry a lot of mental baggage for me, and I like to model other data when learning new tools. You can find the GitHub repo <a href="https://github.com/ch4s3/proto_man">here</a>.</p>
<p>First let’s spin up a new mix app with a supervisor so that we can run a client and server from <code>iex</code>.</p>
<pre><code class="bash"> mix new proto_man --sup
</code></pre>
<p>This will generate the usual structure, but with an <code>application.ex</code> that gives you a bare bones <a href="https://hexdocs.pm/elixir/Supervisor.html">supervisor</a>.</p>
<p>Next, let’s install <a href="https://github.com/ninenines/cowboy">Cowboy</a>, <a href="https://github.com/elixir-plug/plug">Plug</a>, and <a href="https://github.com/edgurgel/httpoison">HTTPoison</a>, so that we can make and serve requests. Add the following to <code>mix.exs</code>.</p>
 defp deps do">
<pre><code class="elixir">defp deps do
  [
    {:cowboy, "~> 1.1.2"},
    {:httpoison, "~> 1.0"},
    {:plug, "~> 1.5.0-rc.1"}
  ]
end
</code></pre>
<p>After running <code>mix deps.get</code>, we will create a simple router module to serve responses to requests.</p>
 defmodule ProtoMan.Router do">
<pre><code class="elixir">defmodule ProtoMan.Router do
  use Plug.Router

  plug :match
  plug :dispatch

  get "/androids" do
    send_resp(conn, 200, "this will return androids soon")
  end

  post "/androids" do
    send_resp(conn, 501, "nothing to post to yet")
  end

  match _ do
    send_resp(conn, 404, "oops")
  end
end
</code></pre>
<p><em>There is a correction as of 2/13/18 in the post function, the original version was missing the conn arg.</em></p>
<p>Next, let’s register this with the supervisor in <code>lib/proto_man/application.ex</code> </p>
 defmodule ProtoMan.Application do">
<pre><code class="elixir">defmodule ProtoMan.Application do
  # See https://hexdocs.pm/elixir/Application.html
  # for more information on OTP Applications
  @moduledoc false
  use Application

  def start(_type, _args) do
    # List all child processes to be supervised
    children = [
      Plug.Adapters.Cowboy.child_spec(:http, ProtoMan.Router, [], [port: 4001])
    ]

    # See https://hexdocs.pm/elixir/Supervisor.html
    # for other strategies and supported options
    opts = [strategy: :one_for_one, name: ProtoMan.Supervisor]
    Supervisor.start_link(children, opts)
  end
end
</code></pre>
<h2>Adding a Protocol Buffer Message</h2>
<p>Now, if you run <code>iex -S mix</code> you should be able to run <code>curl http://localhost:4001/androids</code> in another terminal tab and get a response. This is a good time to start working with the actual protocol buffers for our app. Create an <code>Androids</code> submodule in <code>lib/proto_man</code> that looks like the following.</p>
 defmodule ProtoMan.Androids do">
<pre><code class="elixir">defmodule ProtoMan.Androids do
  use Protobuf, """
    message Android {
      message Health {
        required uint32 value = 1;
      }
      enum SpecialWeapon {
        MegaBuster = 0;
        AtomicFire = 1;
        ProtoShield = 2;
        LeafShield = 3;
        DrillBomb = 4;
      }
      enum Version {
        V1 = 1;
        V2 = 2;
      }
      required string name = 1;
      required SpecialWeapon special_weapon = 2;
      required Version version = 3;
      optional Health hp = 4;
    }
  """

  def safe_decode(bytes) do
    try do
      {:ok, ProtoMan.Androids.Android.decode(bytes)}
    rescue
      ErlangError ->
        {:error, "Error decoding data"}
    end
  end
end
</code></pre>
<p>If you are using the excellent <a href="https://github.com/JakeBecker/elixir-ls">ElixirLS</a> (Elixir language server) for VS Code, or Credo, you are likely to see some errors in this file related to the quoted string, but they shouldn’t cause any real issues. This is another reason I might consider protobuf-elixir.</p>
<p>The <code>use Protobuf</code> macro from exprotobuf takes a quoted string of protobuf syntax and generates encoders and decoders for the data as well as an Elixir struct definition. Note that protocol buffers are organized as messages, and messages may have sub-messages. You can read more about the format <a href="https://developers.google.com/protocol-buffers/docs/overview#how-do-they-work">here</a>. Our Android message has a required name, two required enums, and an optional sub message. Distinguishing optional and required fields is a really nice feature of protocol buffers, and allows for succinct interactions when only some fields are needed. </p>
<p>I’m also including a wrapper for decoding our messages, because invalid input will cause gpb to throw an error that I prefer to handle at a higher level. This will be helpful for debugging and testing messages, and we will use it in the final router to handle parsing.</p>
<h3>Getting something useful</h3>
<p>Next, let’s fill in the get function in <code>router.ex</code>. Add <code>alias ProtoMan.Androids</code> to the top of the module, and edit the get function to look like the following.</p>
 get &quot;/androids&quot; do">
<pre><code class="elixir">get "/androids" do
  android =
    Androids.Android.new(
      name: "Rock",
      special_weapon: :ProtoShield,
      version: :V1,
      hp: %Androids.Android.Health{value: 100}
    )

  resp = Androids.Android.encode(android)

  conn
  |> put_resp_header("content-type", "application/octet-stream")
  |> send_resp(200, resp)
end
</code></pre>
<p>You can see the use of the generated encoder for the Android message here. If you add a call to <code>IEx.pry</code> after encoding you can inspect the response and see the binary output, <code><<10, 4, 82, 111, 99, 107, 16, 2, 24, 1, 34, 2, 8, 100>></code>. One of the reasons that protocol buffers are so small and fast is that they are transmitted in a binary format, rather than plain text like XML or JSON. As an aside, you could use Elixir’s excellent binary pattern matching to build a simple, but fast, parser for protocol buffers. You may note that the response header is “application/octet-stream”; this isn’t strictly necessary and there is no official content type, but a search of StackOverflow turned up a <a href="https://stackoverflow.com/questions/30505408/what-is-the-correct-protobuf-content-type">discussion</a> that led me to this choice. At this point you could use curl to check the endpoint, but you wouldn’t see anything, since curl isn’t really built to work with protobuf.</p>
<p>We would like to see some output, so let’s write a quick client that we can run from the same iex session.</p>
 defmodule ProtoMan.Client do">
<pre><code class="elixir">defmodule ProtoMan.Client do
  require Logger
  alias ProtoMan.Androids

  HTTPoison.start()

  def get() do
    Logger.info(fn -> "Calling for Android list" end)
    res = HTTPoison.get!("http://localhost:4001/androids")
    IO.inspect(res.body)
    Logger.info(fn -> "Android response code: #{res.status_code}" end)
    Androids.Android.decode(res.body)
  end
end
</code></pre>
<p>If you restart your iex session and run <code>ProtoMan.Client.get()</code> you should see the decoded version of the message.</p>
<pre><code class="elixir"> iex(0)> ProtoMan.Client.get()
%ProtoMan.Androids.Android{
hp: %ProtoMan.Androids.Android.Health{value: 100},
name: "Rock",
special_weapon: :ProtoShield,
version: :V1
}
</code></pre>
<p>Congratulations, you have now sent and received a protocol buffer message.</p>
<h2>Looking at proto files</h2>
<p>Now that we have a minimal get function in the client, let’s take a moment to look at the other functionality in exprotobuf for defining messages. The library also comes with functionality for defining messages in <code>.proto</code> files, which is more in line with best practices, is more appropriate for production use, and shouldn’t upset your linter. If you’re using vscode, install <a href="https://github.com/zxh0/vscode-proto3">vscode-proto3</a> so that you can make use of syntax highlighting. Atom has <a href="https://github.com/podgib/atom-protobuf">atom-protobuf</a>. Once you have done that, create a folder in <code>lib</code> called <code>proto</code> and add a file called <code>messages.proto</code>. We’ll be using this to pass status messages back from the post route to our client. The following should be sufficient for that purpose.</p>
 message Message {">
<pre><code class="proto">message Message {
  enum Status {
    OK = 0;
    ERROR = 1;
  }
  required string text = 1;
  required Status status = 2;
}
</code></pre>
<p>The message should be self explanatory, but note that protocol buffer enums use all caps names. Next add a corresponding Elixir module in <code>lib/proto_man/messages.ex</code>.</p>
 defmodule ProtoMan.Messages do">
<pre><code class="elixir">defmodule ProtoMan.Messages do
  use Protobuf, from: Path.expand("../proto/messages.proto", __DIR__)
end
</code></pre>
<p>The <code>use Protobuf</code> macro we saw earlier may also be passed a file, and will similarly generate encoders, decoders, and a struct definition.</p>
<h2>Posting and Receiving Messages</h2>
<p>Now that we have a client, server, and two message types to work with, we can round out the router with a post function that can handle incoming protocol buffer messages. This is the final routing module.</p>
 defmodule ProtoMan.Router do">
<pre><code class="elixir">defmodule ProtoMan.Router do
  use Plug.Router
  alias ProtoMan.{Androids, Messages}

  plug :match
  plug :dispatch

  get "/androids" do
    android =
      Androids.Android.new(
        name: "Rock",
        special_weapon: :ProtoShield,
        version: :V1,
        hp: %Androids.Android.Health{value: 100}
      )

    resp = Androids.Android.encode(android)

    conn
    |> put_resp_header("content-type", "application/octet-stream")
    |> send_resp(200, resp)
  end

  post "/androids" do
    with {:ok, proto_bytes, _conn} <- Plug.Conn.read_body(conn),
         {:ok, _android} <- Androids.safe_decode(proto_bytes),
         message <- Messages.Message.new(text: "successfully posted", status: :OK),
         resp <- Messages.Message.encode(message) do
      conn
      |> put_resp_header("content-type", "application/octet-stream")
      |> send_resp(200, resp)
    else
      {:error, error} ->
        message = Messages.Message.new(text: error, status: :ERROR)
        resp = Messages.Message.encode(message)

        conn
        |> put_resp_header("content-type", "application/octet-stream")
        |> send_resp(500, resp)
    end
  end

  match _ do
    send_resp(conn, 404, "oops")
  end
end
</code></pre>
<p>The post function provides a nice opportunity to use Elixir’s with syntax to read the posted message and build a response, or fall off into error handling. Joseph Kain has a nice explanation of <code>with</code> <a href="http://learningelixir.joekain.com/learning-elixir-with/">here</a>. You can see that we’re using the <code>safe_decode/1</code> function from earlier so that we can gracefully handle parsing errors. Otherwise, this works very much like the get function. In a real application, we would probably persist the posted message, or pass it along, but that isn’t really necessary to explore protobuf.</p>
<p>With the router in place, we need a client function to post data; curl and Postman don’t support protobuf, so we need to write our own. We will do that in the client, as demonstrated below.</p>
 defmodule ProtoMan.Client do">
<pre><code class="elixir">defmodule ProtoMan.Client do
  require Logger
  alias ProtoMan.{Androids, Messages}

  HTTPoison.start()

  def get() do
    Logger.info(fn -> "Calling for Android list" end)
    res = HTTPoison.get!("http://localhost:4001/androids")
    IO.inspect(res.body)
    Logger.info(fn -> "Android response code: #{res.status_code}" end)
    Androids.Android.decode(res.body)
  end

  def post(name, special_weapon, version) do
    post(name, special_weapon, version, nil)
  end

  def post(name, special_weapon, version, hp) do
    with {:ok, proto_buf_bytes} <- encode(name, special_weapon, version, hp),
         {:ok, response} <- HTTPoison.post("http://localhost:4001/androids", proto_buf_bytes) do
      Messages.Message.decode(response.body)
    else
      {:error, error} ->
        error
    end
  end

  defp encode(name, special_weapon, version, hp) when is_nil(hp) do
    try do
      protobuf_bytes =
        Androids.Android.new(name: name, special_weapon: special_weapon, version: version)
        |> Androids.Android.encode()

      {:ok, protobuf_bytes}
    rescue
      ErlangError ->
        {:error, "Error encoding data"}
    end
  end

  defp encode(name, special_weapon, version, hp) do
    try do
      protobuf_bytes =
        Androids.Android.new(
          name: name,
          special_weapon: special_weapon,
          version: version,
          hp: %Androids.Android.Health{value: hp}
        )
        |> Androids.Android.encode()

      {:ok, protobuf_bytes}
    rescue
      ErlangError ->
        {:error, "Error encoding data"}
    end
  end
end
</code></pre>
<p>Much like the Androids module’s <code>safe_decode/1</code> function, we’re wrapping encoding with functions that handle Erlang errors, and we make our sub-message optional by using a guard clause. At this point you can restart the iex session and post a message.</p>
<pre><code class="elixir"> iex(1)> ProtoMan.Client.post("ProtoMan", :ProtoShield, :V2, 100)
%ProtoMan.Messages.Message{status: :OK, text: "successfully posted"}
</code></pre>
<p>At this point everything should be working as planned.</p>
<h2>Wrapping Up</h2>
<p>This guide isn’t meant to be an exhaustive treatment of the when, why, and how of using protocol buffers in Elixir, but rather an on-ramp for exploring the topic on your own. If you want to know more I would suggest reading the official <a href="https://developers.google.com/protocol-buffers/docs/overview">overview</a>, and digging into either exprotobuf or protobuf-elixir. Bing Han, the author of protobuf-elixir, is often on the <a href="https://elixir-slackin.herokuapp.com/">Elixir slack channel</a>, and is quite helpful. The <a href="https://elixirforum.com/">Elixir Forum</a> is also a great place to get help and advice. As always, feel free to reach out to me if you have any questions or comments, and thanks for reading!</p>
<h1><a href="https://by-cha.se/working-with-the-elixir-ast.html">Working with the Elixir AST</a></h1>
<p><em>Chase Gilliam. Published 2018-01-30, updated 2019-05-12.</em></p>
<p>An <a href="https://en.wikipedia.org/wiki/Abstract_syntax_tree">Abstract Syntax Tree</a> (AST) is a tree-based data structure that represents the structure of some code. It is abstract because it doesn’t capture every concrete detail of the code’s specific syntax. Some aspects are captured by the structure of the tree itself and the relationships amongst the nodes. Lisp users will be intimately familiar with the concept, as Lisp’s S-expressions form a tree that is both a syntax tree and the concrete code. ASTs are used as intermediate representations of code by parsers and compilers when compiling and executing code.</p>
<p>Elixir’s AST is accessible from the language itself without any special tools, which isn’t necessarily the case with similar languages. This is useful for understanding how aspects of the language work, and is related to how <a href="https://elixir-lang.org/getting-started/meta/macros.html">macros</a> work in the language. The Elixir AST represents code using <code>tuples</code> with 3 elements: the function name, metadata, and the function’s arguments. This forms a tree because <code>defmodule</code> is a macro (a special function) from <a href="https://hexdocs.pm/elixir/Kernel.html#defmodule/2">Kernel</a> whose arguments are an <code>alias</code>, which is the module name, and a <code>do</code> block, which is the module’s code. The AST can be accessed using the Kernel macro <code>quote</code>, or <code>Code.string_to_quoted/1</code> if you want to load a string or read code from a file.</p>
<p>Before continuing, consider checking out the <a href="https://github.com/Ch4s3/ex_ast">accompanying code</a> on GitHub.</p>
<p>Consider the following module from the repo.</p>
 defmodule Examples.HttpGetter do">
<pre><code class="elixir">defmodule Examples.HttpGetter do
  import SweetXml

  def get do
    HTTPoison.start()

    HTTPoison.get!("https://en.wikipedia.org/wiki/Prospect_Park_(Brooklyn)")
    |> body
    |> parse_body
  end

  def body(res) do
    res.body
  end

  def parse_body(body) do
    body |> xpath(~x"//span[text()='Overview']/following::p[descendant-or-self::text()]")
  end
end
</code></pre>
<p><em>The functionality isn’t important, but it has a nice SweetXml example as a bonus</em></p>
<p>This module can be turned into an AST by passing its file to <code>Code.string_to_quoted/1</code>.</p>
 {:ok, ast} =">
<pre><code class="elixir">{:ok, ast} =
  "lib/examples/http_getter.ex"
  |> File.read!()
  |> Code.string_to_quoted()
</code></pre>
<p>The AST will look like the following.</p>
<pre><code class="elixir"> {:defmodule, [line: 1],
[
{:__aliases__, [line: 1], [:Examples, :HttpGetter]},
[
do: {:__block__, [],
[
{:import, [line: 2], [{:__aliases__, [line: 2], [:SweetXml]}]},
{:def, [line: 3],
[
{:get, [line: 3], nil},
[
do: {:__block__, [],
[
{{:., [line: 4],
[{:__aliases__, [line: 4], [:HTTPoison]}, :start]},
[line: 4], []},
{:|>, [line: 7],
[
{:|>, [line: 6],
[
{{:., [line: 5],
[{:__aliases__, [line: 5], [:HTTPoison]}, :get!]},
[line: 5],
["https://en.wikipedia.org/wiki/Prospect_Park_(Brooklyn)"]},
{:body, [line: 6], nil}
]},
{:parse_body, [line: 7], nil}
]}
]}
]
]},
{:def, [line: 10],
[
{:body, [line: 10], [{:res, [line: 10], nil}]},
[
do: {{:., [line: 11], [{:res, [line: 11], nil}, :body]},
[line: 11], []}
]
]},
{:def, [line: 14],
[
{:parse_body, [line: 14], [{:body, [line: 14], nil}]},
[
do: {:|>, [line: 15],
[
{:body, [line: 15], nil},
{:xpath, [line: 15],
[
{:sigil_x, [line: 15],
[
{:<<>>, [line: 15],
["//span[text()='Overview']/following::p[descendant-or-self::text()]"]},
[]
]}
]}
]}
]
]}
]}
]
]}
</code></pre>
<p>It’s interesting to note how the pipe operator (<code>|></code>) is preserved in the abstract representation. You may also note that the line numbers are preserved in each tuple’s metadata. If you are paying close attention to those line numbers you will notice that line 7, <code>|> parse_body</code>, appears first and encloses lines 6 and 5. That gives you a good sense of how the pipe operator is passing arguments to functions.</p>
<p>We can also move in the opposite direction with <code>Macro.to_string</code>.</p>
<pre><code class="elixir"> iex(1)> Macro.to_string(ast)
"defmodule(Examples.HttpGetter) do\n import(SweetXml)\n def(get) do\n HTTPoison.start()\n HTTPoison.get!(\"https://en.wikipedia.org/wiki/Prospect_Park_(Brooklyn)\") |> body |> parse_body\n end\n def(body(res)) do\n res.body()\n end\n def(parse_body(body)) do\n body |> xpath(~x\"//span[text()='Overview']/following::p[descendant-or-self::text()]\")\n end\nend"
</code></pre>
<p>We can also turn the AST back into code.</p>
<pre><code class="elixir"> Code.eval_quoted(ast)
{{:module, Examples.HttpGetter,
<<70, 79, 82, 49, 0, 0, 7, 4, 66, 69, 65, 77, 65, 116, 85, 56, 0, 0, 0, 224,
0, 0, 0, 23, 26, 69, 108, 105, 120, 105, 114, 46, 69, 120, 97, 109, 112,
108, 101, 115, 46, 72, 116, 116, 112, 71, ...>>, {:parse_body, 1}}, []}
</code></pre>
<p>This evaluates the code, which is a module, and loads it in memory. At this point you could call <code>Examples.HttpGetter.get()</code> and it would work as expected. </p>
<p>Moving back to the AST: since it is a regular Elixir data structure, it can be parsed and manipulated by your own code, which can be very powerful. Specifically, you can write a parser that walks the tree and uses pattern matching to pluck specific chunks of code and manipulate or evaluate them. You can see an example of this powerful technique <a href="https://github.com/rrrene/credo/blob/v0.8.10/lib/credo/code.ex#L68">here</a> in Credo, which is a static code analysis tool for Elixir.</p>
<p>Of course, this barely scratches the surface, but it should get you started. To learn more about macros, I highly recommend checking out Chris McCord’s book <a href="https://pragprog.com/book/cmelixir/metaprogramming-elixir">Metaprogramming Elixir</a>. The official <a href="https://elixir-lang.org/getting-started/meta/macros.html">docs</a> and <a href="https://elixirschool.com/en/lessons/advanced/metaprogramming/">Elixir School</a> also have nice articles.</p>
<p>Thanks for reading, and as always, if you have any questions or comments, feel free to reach out to me!</p>
<h1><a href="https://by-cha.se/http-2-today-with-phoenix.html">HTTP/2 Today with Phoenix</a></h1>
<p><em>Chase Gilliam. Published 2018-01-28, updated 2019-05-12.</em></p>
<p>As you may know, the <a href="http://www.ietf.org/">IETF</a>’s <a href="https://httpwg.github.io/">HTTP Working Group</a> has released a new version of the HTTP standard, <a href="https://http2.github.io/">HTTP/2</a>. The new standard is binary, fully multiplexed, and supports server push. The standard was approved in February of 2015, and now <a href="https://caniuse.com/#feat=http2">almost all</a> modern browsers support it, so you should be able to use it for new projects that don’t target IE versions lower than 11. Unfortunately, outside of NGINX and some CDNs, server-side support has been lagging in many language ecosystems. However, the master branches of Cowboy 2 and Plug have supported the standard since November of 2017. It requires a bit of effort, but you can get started with HTTP/2 in a new Phoenix app today.</p>
<p>Back in December, <a href="https://maartenvanvliet.nl/">Maarten Van Vliet</a> posted a nice <a href="https://maartenvanvliet.nl/2017/12/15/upgrading_phoenix_to_http2/">article</a> describing how to do the minimal setup for a new app. I’ll be expanding on that here and explaining how to use Webpack to split your assets to take advantage of HTTP/2 multiplexing.</p>
<h2>Getting Started</h2>
<p>Let’s quickly start a new phoenix project.</p>
<pre><code class="bash"> mix phx.new --no-brunch --no-ecto http_2_today
</code></pre>
<p>We’ll be omitting Ecto for simplicity, and Brunch so that we can add Webpack. Webpack, unlike Brunch, supports code splitting, which is useful for creating a number of small files that can be pushed to the client in parallel. Webpack also allows async loading, which can be useful for grabbing assets as you need them and can be combined with HTTP/2 in interesting ways.</p>
<p>Next, let’s update our <code>mix.exs</code> file to use the versions of Cowboy, Phoenix, and Plug that support HTTP/2.</p>
 defp deps do">
<pre><code class="elixir">defp deps do
  [
    {:phoenix, git: "https://github.com/phoenixframework/phoenix", branch: "master", override: true},
    {:plug, "1.5.0-rc.1", override: true},
    {:phoenix_pubsub, "~> 1.0"},
    {:phoenix_html, "~> 2.10"},
    {:phoenix_live_reload, "~> 1.0", only: :dev},
    {:gettext, "~> 0.11"},
    {:cowboy, "~> 2.1", override: true}
  ]
end
</code></pre>
<p>Plug 1.5 should be out soon, and for the moment you can use 1.5.0-rc.1. Cowboy 2.1 is stable and simply requires overriding the Phoenix default, as with Plug. Phoenix is targeting support with updated defaults in 1.4, and there is no release candidate at the time of writing this post, so you’ll need to target the master branch for now.</p>
<p>Run <code>mix deps.get</code> and then check to make sure that <code>mix phx.server</code> works. Everything should be running and ok at this point.</p>
<p>Now let’s quickly set up Webpack and add some simple JavaScript and CSS. I’ll assume you have yarn installed and are loosely familiar with it; if not, check their <a href="https://yarnpkg.com">site</a> for details.</p>
<p>First create an <code>assets/</code> folder at the top level of your project. Then move to that directory and begin adding Webpack.</p>
<pre><code class="bash"> mkdir assets
cd assets
yarn add webpack webpack-dev-server --dev
yarn add phoenix
</code></pre>
<p>This will create a <code>package.json</code> file, a <code>yarn.lock</code> file, and a <code>node_modules/</code> directory. Now let’s add a few more dependencies related to ES6 transformation and handling Sass.</p>
<pre><code class="bash"> yarn add babel-core babel-loader babel-preset-env css-loader extract-text-webpack-plugin node-sass sass-loader style-loader --dev
</code></pre>
<p>If you’re coming from Brunch, or from Phoenix without a front-end build tool, this looks like a lot of impenetrable stuff, but it all boils down to turning new JavaScript features and Sass into something the majority of browsers can handle.</p>
<h2>Webpack Config & Assets</h2>
<p>Now let’s create a simple(ish) webpack config file that will get us started.</p>
<pre><code> touch webpack.config.js
</code></pre>
<p>Next we’ll work on a config that will process top-level files in <code>/js</code> and <code>/css</code> as well as splitting the <code>phoenix_html</code> lib out into a vendor bundle. Vendoring is a great way to take advantage of caching and, in our case, multiplexing. I won’t dwell on this too much, as Webpack 4, which is in RC, changes vendoring a bit and removes the <code>CommonsChunkPlugin</code>. The following setup also assumes you’ll be using some sort of jsx files, but you could easily use <code>.vue</code> or something else.</p>
 const webpack">
<pre><code class="javascript">const webpack = require("webpack");
const ExtractTextPlugin = require('extract-text-webpack-plugin');
const path = require('path');

module.exports = {
  entry: {
    'app': ['./js/app.js', './css/app.scss'],
    'vendor': ['phoenix']
  },
  output: {
    path: path.resolve(__dirname, '../priv/static/js'),
    filename: '[name].js'
  },
  devtool: 'source-map',
  resolve: {
    extensions: ['.js', '.jsx']
  },
  module: {
    rules: [
      {
        test: /\.(sass|scss)$/,
        include: /css/,
        use: ExtractTextPlugin.extract({
          fallback: 'style-loader',
          use: [
            {loader: 'css-loader'},
            {loader: 'sass-loader'}
          ]
        })
      },
      {
        test: /\.(js|jsx)$/,
        exclude: /node_modules/,
        use: ['babel-loader']
      }
    ]
  },
  plugins: [
    new ExtractTextPlugin('css/app.css'),
    new webpack.optimize.CommonsChunkPlugin({name: 'vendor'})
  ]
};
</code></pre>
<h3>Babel</h3>
<p>Next let’s add a <code>.babelrc</code> file, so that we can use Babel and store its config separately from webpack.</p>
<pre><code> touch .babelrc
</code></pre>
<p>Then set it to use the <code>env</code> preset, which should be adequate for most users.</p>
<pre><code class="javascript">{
  "presets": ["env"]
}
</code></pre>
<h3>The JavaScript</h3>
<p>For now, let’s just create a folder and a simple entry point.</p>
<pre><code class="bash"> mkdir js
touch js/app.js
</code></pre>
<p>Now in the js file, let’s import the Phoenix sockets code.</p>
<pre><code class="javascript"> import { Socket } from 'phoenix';
</code></pre>
<p>This is fine to start with, and will let us see if things are working as intended.</p>
<h3>Some Sass/Scss</h3>
<p>Similar to the above JavaScript, we’ll create an entry point file. I’ll be using Scss syntax for Sass.</p>
<pre><code class="bash"> mkdir css
touch css/app.scss
touch css/normalize.css
</code></pre>
<p>Go to the Normalize <a href="https://github.com/necolas/normalize.css">GitHub page</a>, copy the latest version, and paste it into your new <code>normalize.css</code> file. Then include that in your <code>app.scss</code> file.</p>
<pre><code class="sass"> @import "normalize.css";
</code></pre>
<p>This will bundle up normalize, and you can use the same pattern for your own css/scss/sass files.</p>
<h3>Configuring the Start Script</h3>
<p>This is a good time to configure a simple start script for <code>webpack-dev-server</code>. Open your <code>package.json</code> and add the following snippet.</p>
<pre><code class="javascript">"scripts": {
  "start": "webpack-dev-server --https --color --compress"
},
</code></pre>
<p>Now when you run <code>yarn start</code> inside of <code>assets/</code>, you will spin up a dev server that serves assets over HTTPS (which is necessary for HTTP/2), colorizes its output, and gzips everything. If <code>yarn start</code> works, then you’re ready to jump back to the Phoenix portion of the app and configure things there.</p>
<h3>Phoenix Configuration</h3>
<p>First, we need to generate a private key and self-signed certificate so that Cowboy and Phoenix can serve your application over HTTPS locally. The following is taken directly from the section in <code>config/dev.exs</code> about SSL.</p>
<pre><code class="bash"> openssl req -new -newkey rsa:4096 -days 365 -nodes -x509 -subj "/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com" -keyout priv/server.key -out priv/server.pem
</code></pre>
<p>You’ll want to add the following to your <code>.gitignore</code>, as you should never store sensitive information or credentials in your source control.</p>
<pre><code> priv/server.key
priv/server.pem
</code></pre>
<p>Now let’s configure our endpoint to serve our app over HTTPS and run a watcher for <code>webpack-dev-server</code>. Add the following to your <code>config/dev.exs</code>.</p>
<pre><code class="elixir">config :http_2_today, Http2TodayWeb.Endpoint,
  debug_errors: true,
  code_reloader: true,
  check_origin: false,
  watchers: [
    node: [
      "node_modules/.bin/webpack-dev-server",
      "--https",
      "--color",
      "--inline",
      "--hot",
      "--stdin",
      "--host", "localhost",
      "--port", "8080",
      "--public", "localhost:8080",
      "--config", "webpack.config.js",
      cd: Path.expand("../assets", __DIR__)
    ]
  ],
  https: [port: 4000, keyfile: "priv/server.key", certfile: "priv/server.pem"]
</code></pre>
<p>You should now be able to run <code>mix phx.server</code> from your main directory and see Webpack output in the console.</p>
<p>At this point, we need to make sure we can include our assets in our HTML templates.</p>
<h4>View Functions</h4>
<p>Add the following functions to <code>lib/http_2_today_web/views/layout_view.ex</code>.</p>
<pre><code class="elixir">defmodule Http2TodayWeb.LayoutView do
  use Http2TodayWeb, :view

  def js_script_tag do
    if env() == :prod do
      # In production we'll just reference the files; note that these are
      # plain strings, so we interpolate the endpoint's static_path/1 rather
      # than using EEx tags, which would not be evaluated here
      """
      <script src="#{Http2TodayWeb.Endpoint.static_path("/js/vendor.js")}"></script>
      <script src="#{Http2TodayWeb.Endpoint.static_path("/js/app.js")}"></script>
      """
    else
      # In development mode we'll load it from our webpack dev server
      """
      <script src="https://localhost:8080/vendor.js"></script>
      <script src="https://localhost:8080/app.js"></script>
      """
    end
  end

  # Ditto for the css
  def css_link_tag do
    if env() == :prod do
      ~s(<link rel="stylesheet" href="#{Http2TodayWeb.Endpoint.static_path("/css/app.css")}" />)
    else
      ~s(<link rel="stylesheet" type="text/css" href="https://localhost:8080/css/app.css" />)
    end
  end

  defp env do
    unquote(Mix.env())
  end
end
</code></pre>
<p><em>Note, this is updated from the original version of this post to use <code>unquote/1</code> to evaluate the environment at compile time, since Mix isn’t available in a running production app. Thanks to <a href="https://twitter.com/OvermindDL1">OvermindDL1</a> for the feedback!</em></p>
<p>This will load assets from the dev server in dev mode and serve the bundled files in production. Now we can use these functions in <code>lib/http_2_today_web/templates/layout/app.html.eex</code>.</p>
<pre><code class="erb"><!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <meta name="description" content="">
    <meta name="author" content="">
    <title>Hello Http2Today!</title>
    <%= {:safe, css_link_tag()} %>
  </head>
  <body>
    <div class="container">
      <header class="header">
        <nav role="navigation">
          <ul class="nav nav-pills pull-right">
            <li><a href="http://www.phoenixframework.org/docs">Get Started</a></li>
          </ul>
        </nav>
        <span class="logo"></span>
      </header>
      <p class="alert alert-info" role="alert"><%= get_flash(@conn, :info) %></p>
      <p class="alert alert-danger" role="alert"><%= get_flash(@conn, :error) %></p>
      <main role="main">
        <%= render @view_module, @view_template, assigns %>
      </main>
    </div> <!-- /container -->
    <%= {:safe, js_script_tag()} %>
  </body>
</html>
</code></pre>
<h3>Trying It Out</h3>
<p>Now, let’s put it all together and check it out in the browser. Run <code>mix phx.server</code> and visit <a href="https://localhost:4000">https://localhost:4000</a> in your favorite browser. You’ll probably have to tell the browser at this point to trust your self-signed cert, and you’ll need to visit <a href="https://localhost:8080/">https://localhost:8080/</a> and do the same for the assets host. If that worked you should be able to open the inspector, switch to the network tab, and see that everything is loading over HTTP/2.</p>
<p><img src="https://res.cloudinary.com/dbwkpvbdo/image/upload/q_auto:good/v1517201192/inspector_h2.png" alt="inspector view" /></p>
<p>Notice under protocol, all of the assets are marked h2, which is shorthand for HTTP/2.</p>
<h3>Wrap Up</h3>
<p>This should give you enough to start working with HTTP/2 and actual assets. I’ll leave it as an exercise for readers to explore pushing multiple js files to the client and combining Webpack’s lazy loading to push files on demand. In production, you will need to generate real certs and configure <code>prod.exs</code>, but that’s out of the scope of this post. As always, if you have any questions, feel free to reach out and ask me.</p>
<p>You can find the full source code <a href="https://github.com/Ch4s3/http_2_today">here</a>.</p>
Simple Intro to CSP for Railshttps://by-cha.se/simple-intro-to-csp-for-rails.html2018-01-13T16:35:00-05:002019-05-12T16:32:46-04:00Chase Gilliam<p>Security has been in the news a lot recently since the disclosure of the <a href="https://meltdownattack.com/">Spectre & Meltdown</a> vulnerabilities, so I thought it might be a good time to cover a simple, but often overlooked upgrade to Rails security: CSP. While CSP isn’t related to the headline-grabbing security issues of the moment, it is important. CSP, or the HTTP <code>Content-Security-Policy</code> response header, tells user agents (browsers) which resources they are allowed to load for a page. This is useful in mitigating <a href="https://developer.mozilla.org/en-US/docs/Glossary/Cross-site_scripting">XSS</a> attacks. Note that I said mitigating, as there is no security silver bullet; I would encourage you to read up on defense in depth. <a href="http://weblog.rubyonrails.org/2017/11/27/Rails-5-2-Active-Storage-Redis-Cache-Store-HTTP2-Early-Hints-Credentials/">Rails 5.2</a> will ship with a CSP header <a href="https://github.com/rails/rails/pull/31162">DSL</a> by default, so I will keep this brief, but if you’re stuck on a lower version or won’t be upgrading ASAP, this intro will be useful.</p>
<h1>The Secure Headers Gem</h1>
<p>Some folks at Twitter built a gem called <a href="https://github.com/twitter/secureheaders">secure_headers</a> which does pretty much everything you would want a CSP gem to do. You install it with the usual <code>gem "secure_headers"</code> and configure it with an initializer at <code>config/initializers/secure_headers.rb</code>, or similar. You can find Sinatra config in the gem’s docs, there is a separate gem for other Rack apps, and they provide a <a href="https://github.com/twitter/secureheaders#similar-libraries">list</a> of similar libraries. A sample config might look like the following snippet.</p>
<pre><code class="ruby">SecureHeaders::Configuration.default do |config|
  config.cookies = {
    secure: true, # mark all cookies as "Secure"
    httponly: true, # mark all cookies as "HttpOnly"
  }
  config.x_content_type_options = "nosniff"
  config.x_xss_protection = "1; mode=block"
  config.csp = {
    default_src: Rails.env.production? ? %w(https: 'self') : %w(http: 'self' 'unsafe-inline'),
    connect_src: %w(
      'self'
    ),
    font_src: %w(
      'self'
      https://fonts.gstatic.com
    ),
    img_src: %w(
      'self'
      https://res.cloudinary.com
    ),
    script_src: %w(
      'self'
      'unsafe-inline'
      https://*.cloudfront.net
    )
  }

  # Use the following if you have CSP issues locally with
  # tools like webpack-dev-server
  if !Rails.env.production?
    config.csp[:connect_src] << "*"
  end
end
</code></pre>
<p>With respect to cookies, you shouldn’t store sensitive data like passwords in cookies. The <code>Secure</code> header ensures that cookies can only be sent over HTTPS, which you should already be using in production. The <code>HttpOnly</code> header ensures that cookies can’t be read from JavaScript’s <code>Document.cookie</code> API, which will help mitigate XSS. I’m omitting <code>SameSite</code>, as it has a nice default, and is a bit tricky, but <a href="https://security.stackexchange.com/questions/168365/is-setting-same-site-attribute-of-a-cookie-to-lax-the-same-as-not-setting-the-sa">check this out</a> if you need cookies for something like Intercom. Read more about these headers <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies">here</a>. </p>
<p>Setting <code>x_content_type_options</code> to “nosniff” prevents <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types#MIME_sniffing">MIME type sniffing</a>. Preventing MIME sniffing is important if your app allows file uploads, as a malicious user could upload an image with JavaScript hidden in it. This issue mostly affects certain versions of IE.</p>
<p>Most new browsers implement some simple XSS protections that overlap the functionality of <code>X-XSS-Protection</code>, but it’s still advisable to enable it for the benefit of users on older browsers. Read about the syntax <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection">here</a>.</p>
<p>The meat of the config deals with the core <code>Content-Security-Policy</code> header config defined by <code>config.csp = {...}</code>. The <code>default-src</code> serves as a fallback for the directives that you define after it; as the name implies, it is a default. The ternary used in the example forces HTTPS in production but not in development, which is convenient, and it allows inlining scripts and styles in development, which is often useful for development tools.</p>
<p>The <code>connect-src</code> directive determines which URLs may be contacted by JavaScript on the page. If you are using web sockets or any scripts that connect to 3rd parties, you will need to edit this directive.</p>
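<p>For example, an app that opens a websocket connection and calls a third-party API from the browser might widen the directive like this. <em>This is a hypothetical sketch, not part of the sample config above; the hosts are placeholders.</em></p>

```ruby
# Hypothetical connect_src fragment for the SecureHeaders config above;
# wss://ws.example.com and https://api.example.com are placeholder hosts.
connect_src: %w(
  'self'
  wss://ws.example.com
  https://api.example.com
),
```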
<p>Loading fonts and images is pretty straightforward with respect to CSP, and the <code>font-src</code>
and <code>img-src</code> directives above demonstrate how to use Google Fonts and images hosted on Cloudinary. If you use similar services or a CDN, you should be able to use this as a template.</p>
<p>JavaScript is controlled by the <code>script-src</code> directive, and it probably has the most profound implications within the set of directives covered here. Most of a site’s XSS risk will be related to how it loads and uses JavaScript. You should generally avoid using <code>unsafe-eval</code> unless you know what you’re doing. Depending upon your needs and browser targets, you may use <code>unsafe-inline</code> to allow inlining JavaScript in HTML. There is some risk, and you should be informed before using this setting. Check out <a href="https://stackoverflow.com/questions/8502307/chrome-18-how-to-allow-inline-scripting-with-a-content-security-policy/38554505#38554505">this</a> StackOverflow discussion for more info.</p>
<p>This is a very brief overview of how to set CSP headers for Rails &lt; 5.2, and it can be adapted to Sinatra. As with any security settings, you shouldn’t copy my snippet directly and should do a bit more reading; however, the Secure Headers gem is relatively simple to use and is a nice security upgrade after you have set up HTTPS. I’ll follow up later with an article about Rails 5.2 specifically once it is released.</p>
HTTP Requests in Rust with Reqwesthttps://by-cha.se/http-requests-in-rust-with-reqwest.html2017-12-12T22:10:00-05:002019-05-08T20:53:33-04:00Chase Gilliam<p>I have been spending some time recently learning <a href="https://www.rust-lang.org/en-US/">Rust</a>, which, if you are unfamiliar, is a functional systems programming language with guaranteed memory and thread safety. That’s quite a description to wrap your head around if, like me, you haven’t written much C or other low level code. If you’re in the same boat and interested in Rust, the common and best bit of advice is to check out “The Book”. <a href="https://doc.rust-lang.org/stable/book/second-edition/">The Book</a>, in its second edition, will walk you through the basics and some advanced topics, including writing a basic web server, which is pretty cool. The Book is probably one of the best introductory texts for a programming language I’ve ever read, though <a href="http://poignant.guide/">Why’s Poignant Guide to Ruby</a> is perhaps a close second.</p>
<p>While the book is great, I generally like to experiment and color outside of the lines while learning a new language. <a href="http://exercism.io/">Exercism</a> is also a nice place to try out a new language on constrained problems and get community feedback. Rust currently has <a href="http://exercism.io/languages/rust/exercises">77 exercises</a> on Exercism at the time of writing this article, and so far I’m enjoying doing them in Rust. One of the other things I like to do when learning a new language is grab and parse some weird pages from Wikipedia, which usually involves learning how to use packages, making HTTP requests, and parsing the results. Rust is a bit more concerned with types and correctness than other languages I use, so I decided to just start with the request part first.</p>
<p>After some reading over at <a href="https://www.reddit.com/r/rust/">/r/rust</a> and a search of <a href="http://doc.crates.io/">crates.io</a> I found <a href="https://github.com/seanmonstar/reqwest">Reqwest</a>, which is a higher level HTTP client built on top of <a href="https://hyper.rs/">hyper</a>. Hyper is fairly low level and is used by a number of popular Rust crates, but requires a bit more Rust knowledge and skill than I currently possess. With simplicity in mind, I’ll be demonstrating HTTP calls with Reqwest.</p>
<p>I’ll assume you already have Rust installed, or can otherwise take a moment to head over to the <a href="https://www.rust-lang.org/en-US/install.html">install page</a> and get setup. <em><a href="https://doc.rust-lang.org/book/second-edition/ch01-01-installation.html">see also</a></em></p>
<p>First create a package with cargo:</p>
<pre><code class="bash"> cargo new http_test --bin
</code></pre>
<p>This creates a new Rust project named <code>http_test</code> that is executable as a binary, due to the <code>--bin</code> flag. That means that once compiled, the project can be run as a standalone piece of code. <em><a href="https://doc.rust-lang.org/book/second-edition/ch01-02-hello-world.html#creating-a-project-with-cargo">more info</a></em></p>
<p>Next, you’ll want to add Reqwest to your project. Rust uses <a href="https://github.com/toml-lang/toml">toml</a> for configuration, and dependencies are listed in <code>Cargo.toml</code>. Your <code>Cargo.toml</code> should look more or less like the following. </p>
<pre><code class="toml">[package]
name = "http_test"
version = "0.1.0"
authors = ["Your Name <your_email@example.com>"]

[dependencies]
reqwest = "0.8.0"
</code></pre>
<p><em>Feel free to experiment with a newer version of Reqwest, but I’m not promising anything newer than “0.8.0” will work.</em></p>
<p>Run <code>cargo build</code> and you should see a list of dependencies being compiled ending with something like:</p>
<pre><code class="bash"> ...
...
Compiling reqwest v0.8.0
Compiling http_test v0.1.0 (file:///Users/your_name/../http_test)
Finished dev [unoptimized + debuginfo] target(s) in 52.51 secs
</code></pre>
<p>If the build step fails, then this guide may be out of date, or your Rust installation may not be complete/correct. Otherwise, you’re all set to start writing code that uses Reqwest. You just need to add <code>extern crate reqwest;</code> to the top of <code>main.rs</code>, and start working on the code.</p>
<pre><code class="rust">extern crate reqwest;

fn main() {
}
</code></pre>
<p><em>this is what we have so far</em></p>
<p>After including the crate, we need to be able to handle errors and read from IO. To do this, you will need to use parts of the standard library that aren’t included by default. We’ll add <code>use std::io::Read;</code> for the <a href="https://doc.rust-lang.org/nightly/std/io/trait.Read.html">Read trait</a> and <code>use std::error::Error;</code> for the Error trait. I’ll address those momentarily. Our code should look like the following snippet.</p>
<pre><code class="rust">extern crate reqwest;

use std::io::Read;
use std::error::Error;

fn main() {
}
</code></pre>
<p>For this simple program, the only line in the main function will be the call to a function I’m calling run, for lack of a more inspiring name. The run function doesn’t take any arguments and returns a <a href="https://doc.rust-lang.org/nightly/std/result/enum.Result.html">Result</a> type, which can either be <code>Ok(T)</code> or <code>Err(E)</code>. This is where our use statement for <code>std::error::Error</code> comes into play. The function definition looks like the following and should return <code>Ok()</code> for now.</p>
<pre><code class="rust">extern crate reqwest;

use std::io::Read;
use std::error::Error;

fn main() {
    run();
}

fn run() -> Result<String, Box<Error>> {
    Ok("Done".into())
}
</code></pre>
<p>Now that the bones are in place, we can start making a request to Wikipedia about something interesting like the <a href="https://en.wikipedia.org/wiki/Emu_War">Emu War</a>. We’ll need to <a href="https://docs.rs/reqwest/0.8.1/reqwest/#making-a-get-request">make a GET request</a> with Reqwest using <code>reqwest::get</code>. The response from Reqwest implements Rust’s <code>Read</code> trait, which is where <code>std::io::Read</code> comes into play. Once we get the results, we will read them and convert them into a string using <code>read_to_string()</code> <a href="https://doc.rust-lang.org/std/io/trait.Read.html#method.read_to_string">from Read</a>. At this point our run function should look like the following.</p>
<pre><code class="rust">fn run() -> Result<String, Box<Error>> {
    let mut res = reqwest::get("https://en.wikipedia.org/wiki/Emu_War")?;
    let mut body = String::new();
    res.read_to_string(&mut body)?;
    Ok("Done".into())
}
</code></pre>
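<p>The <code>?</code> at the end of those lines is Rust’s error propagation operator: on <code>Ok</code> it unwraps the value, and on <code>Err</code> it returns early from <code>run</code> with the error boxed into our <code>Box&lt;Error&gt;</code> return type. Here is a minimal sketch of the same pattern without Reqwest; <code>parse_number</code> is a made-up name for illustration, and newer compilers spell the trait object <code>Box&lt;dyn Error&gt;</code>.</p>

```rust
use std::error::Error;

// Illustrative only: parse_number is not part of the article's program.
// `?` unwraps Ok(n), or returns early with the error boxed via From.
fn parse_number(s: &str) -> Result<i32, Box<dyn Error>> {
    let n: i32 = s.parse()?; // a ParseIntError converts into Box<dyn Error>
    Ok(n)
}

fn main() {
    match parse_number("42") {
        Ok(n) => println!("parsed {}", n),
        Err(e) => eprintln!("error: {}", e),
    }
}
```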
<p>This is all well and good, but you won’t see anything useful if you run this code, so let’s print the status, headers, and result body using Rust’s built-in <code>println!</code> macro. The following is our finished program, and it can be run with <code>cargo run</code>. You’ll get a warning about an unused result, but it should still work. The reason we have set up <code>run()</code> to return a result is so that you can take the code and start adapting it to other uses.</p>
<pre><code class="rust">extern crate reqwest;

use std::io::Read;
use std::error::Error;

fn main() {
    run();
}

fn run() -> Result<String, Box<Error>> {
    let mut res = reqwest::get("https://en.wikipedia.org/wiki/Emu_War")?;
    let mut body = String::new();
    res.read_to_string(&mut body)?;
    println!("Status: {}", res.status());
    println!("Headers:\n{}", res.headers());
    println!("Body:\n{}", body);
    Ok("Done".into())
}
</code></pre>
<p>I hope this has been useful and interesting, and if you have questions, please feel free to reach out on Twitter. Hopefully I’ll have time soon to follow up with a post about parsing the results.</p>