Thursday, September 20, 2012

Interpretations of the GTLC, Part 4: Even Faster

Consider the following statically-typed function. (The type \(\star\) does not occur anywhere in this function.) \[ \lambda f: \mathsf{Int}{\to}\mathsf{Int}. \; x{:}\mathsf{Int} = f(42); \mathbf{return}\,x \] We'd like the execution speed of this function to be the same as if the entire language were statically typed. That is, we don't want statically-typed parts of a gradually-typed program to pay overhead because other parts may be dynamically typed. Unfortunately, in the abstract machines that we've defined so far, there is an overhead. At the point of a function call, such as \(f(42)\) above, the machine needs to check whether \(f\) has evaluated to a closure or to a closure wrapped in a threesome. This act of checking constitutes some run-time overhead.
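To make that dispatch concrete, here is a minimal OCaml sketch of the situation in the earlier machines. The representation and names are my own illustration, not from this series, and the cast operation is stubbed out; the point is only that two kinds of values inhabit function type, so every call site must branch.

```ocaml
(* Two kinds of values inhabit function type, so every application
   must check which one it has before it can jump to the code. *)
type ty = TInt | TDyn | TArrow of ty * ty

type value =
  | VInt of int
  | VClos of (value -> value)             (* a plain closure *)
  | VWrap of (value -> value) * ty * ty   (* a closure under a cast T1->T2 => T3->T4 *)

(* Stand-in for casting a value; the real machines use threesomes. *)
let cast (v : value) (_source : ty) (_target : ty) : value = v

let apply (f : value) (arg : value) : value =
  match f with
  | VClos g -> g arg                                   (* the fast path *)
  | VWrap (g, TArrow (s1, s2), TArrow (t1, t2)) ->
      cast (g (cast arg t1 s1)) s2 t2                  (* cast argument and result *)
  | _ -> failwith "not a function"
```

Even when a program is entirely statically typed and only the first branch is ever taken, the test itself is still executed on every call.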

Taking a step back, there are two approaches that one sees in the literature regarding how a cast is applied to a function. One approach is to build a new function that casts the argument, applies the old function, and then casts the result. The reduction rule looks like this: \[ v : T_1 \to T_2 \Rightarrow T_3 \to T_4 \longrightarrow \lambda x{:}T_3. (v\,(x : T_3 \Rightarrow T_1)) : T_2 \Rightarrow T_4 \] The nice thing about this approach is that there's only one kind of value of function type, functions! So when it comes to function application, we only need one reduction rule, good old beta: \[ (\lambda x{:}T.\,e)\, v \longrightarrow [x{:=}v]e \] The other approach is to leave the cast around the function and then add a second reduction rule for applications. \[ (v_1 : T_1 \to T_2 \Rightarrow T_3 \to T_4) \, v_2 \longrightarrow (v_1\, (v_2 : T_3 \Rightarrow T_1)) : T_2 \Rightarrow T_4 \] The nice thing about this approach is that the cast around the function is easy to access and change, which we took advantage of to compress sequences of such casts. But as we've already pointed out, having two kinds of values at function type induces some run-time overhead, even in parts of the program that are statically typed.
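Continuing the toy OCaml sketch from above, the first approach erases the wrapped-closure case entirely: a cast on a function allocates a fresh closure, and application is plain beta with a single case.

```ocaml
(* First approach: casting a function builds a new closure, so VWrap
   is never needed and application has just one case, beta. *)
let cast_fun (g : value -> value) (s1 : ty) (s2 : ty) (t1 : ty) (t2 : ty) : value =
  VClos (fun x -> cast (g (cast x t1 s1)) s2 t2)

let apply_beta (f : value) (arg : value) : value =
  match f with
  | VClos g -> g arg
  | _ -> failwith "not a function"
```

The cost, as noted above, is that repeated casts nest wrappers: the cast is baked into the new closure's code, where nothing can reach in and compress it.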

Our solution to this conundrum is to use a hybrid representation and to take advantage of the indirection that is already present in a function call. Instead of having two kinds of values at function type, we have only one: a closure that includes an optional threesome: \[ \langle \lambda x{:}T.\, s, \rho, \tau_\bot \rangle \] When a closure is first created, there is no threesome. Later, when a closure is cast, the threesome is added. \[ V(\lambda x{:}T.\, s,\rho) = \langle \lambda x{:}T.\, s, \rho, \bot \rangle \] The one transition rule for function application passes the optional threesome as a special parameter, here named \(c\), to the function. In the case of an un-casted closure, the function ignores the \(c\) parameter. \begin{align*} (x : T_1 = e_1(e_2); s, \rho, \kappa) & \longmapsto (s', \rho'[y{:=}v_2,c{:=}\tau_\bot ], (T_1 \overset{T_1}{\Longrightarrow} T_1, (x,s,\rho)::\kappa)) \\ \text{where } & V(e_1,\rho) = \langle \lambda y{:}T_2. s', \rho', \tau_\bot \rangle \\ \text{and } & V(e_2,\rho) = v_2 \end{align*}
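Here is how the hybrid representation might look, continuing the OCaml sketch (this supersedes the earlier `value` type; threesomes are simplified to a source/target pair, dropping blame labels and the middle type, so this shows the shape of the idea rather than the real machine):

```ocaml
(* The hybrid representation: one kind of function value, a closure
   carrying an optional threesome. *)
type threesome = { src : ty; tgt : ty }

type value =
  | VInt of int
  | VClos of closure
and closure =
  { code : threesome option -> value -> value  (* the hidden parameter c, then the argument *)
  ; c    : threesome option }                  (* None until the closure is first cast *)

(* Closure creation installs no threesome ... *)
let make (body : threesome option -> value -> value) : value =
  VClos { code = body; c = None }

(* ... and the one application rule passes whatever threesome is there.
   The code of an un-casted closure simply ignores its c parameter. *)
let apply (f : value) (arg : value) : value =
  match f with
  | VClos { code; c } -> code c arg
  | _ -> failwith "not a function"

(* Example: the identity, never cast; its code ignores c. *)
let id = make (fun _c x -> x)
let _ = apply id (VInt 42)   (* no dispatch on the representation *)
```

The indirect jump through `code` was already being paid for by every function call; the trick is to make the casted and un-casted cases differ only in the data that rides along with it.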

When an un-casted closure is cast, we build a wrapper function, similar to the first approach discussed above, but using the special variable \(c\) to refer to the threesome instead of hard-coding the cast into the wrapper function. We add \(\mathit{dom}\) and \(\mathit{cod}\) operations for accessing the parts of a function threesome. \begin{align*} \mathsf{cast}(\langle \lambda x{:}T.\,s, \rho, \bot\rangle ,\tau) &= \begin{cases} \mathbf{blame}\,\ell & \text{if } \tau = (T_1 \overset{I^p;\bot^\ell}{\Longrightarrow} T_2) \\ \langle \lambda x_1.\,s', \rho', \tau \rangle & \text{otherwise} \end{cases} \\ & \text{where } s' = (x_2 = x_1 {:} \mathit{dom}(c); \mathbf{return}\, f(x_2) : \mathit{cod}(c)) \\ & \text{and } \rho' = \{ f{:=}\langle \lambda x{:}T.\,s, \rho, \bot\rangle \} \end{align*} When a closure that already carries a threesome is cast again, the two threesomes are combined to save space. \begin{align*} \mathsf{cast}(\langle \lambda x{:}T.\,s, \rho, \tau_1\rangle ,\tau_2) &= \begin{cases} \mathbf{blame}\,\ell & \text{if } (\tau_1; \tau_2) = (T_1 \overset{I^p;\bot^\ell}{\Longrightarrow} T_2) \\ \langle \lambda x{:}T.\,s, \rho, (\tau_1; \tau_2)\rangle & \text{otherwise} \end{cases} \end{align*}
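A sketch of these two closure cases over the simplified representation; \(\mathit{dom}\), \(\mathit{cod}\), and `compose` are hypothetical helpers (real threesome composition also meets the middle types), and the blame cases vanish because the sketch drops labels:

```ocaml
(* Hypothetical helpers over the simplified threesomes. *)
let compose (t1 : threesome) (t2 : threesome) : threesome =
  { src = t1.src; tgt = t2.tgt }   (* real composition also meets the middle types *)

let dom (t : threesome) : threesome =
  match t.src, t.tgt with
  | TArrow (s1, _), TArrow (u1, _) -> { src = u1; tgt = s1 }   (* contravariant *)
  | _ -> failwith "not a function threesome"

let cod (t : threesome) : threesome =
  match t.src, t.tgt with
  | TArrow (_, s2), TArrow (_, u2) -> { src = s2; tgt = u2 }
  | _ -> failwith "not a function threesome"

let rec cast_value (v : value) (t : threesome) : value =
  match v with
  | VClos ({ c = None; _ } as clos) ->
      (* First cast: build a wrapper whose code reads dom and cod from
         its own c parameter instead of hard-coding the cast. *)
      let wrapper c_opt x =
        match c_opt with
        | Some c -> cast_value (apply (VClos clos) (cast_value x (dom c))) (cod c)
        | None -> apply (VClos clos) x   (* for exhaustiveness; the wrapper always carries a threesome *)
      in
      VClos { code = wrapper; c = Some t }
  | VClos { code; c = Some t1 } ->
      (* A later cast: compose the threesomes in place; no new wrapper. *)
      VClos { code; c = Some (compose t1 t) }
  | VInt _ -> v   (* the base-value cases of cast are elided here *)
```

Note how the second case preserves the space efficiency of the earlier machines: repeated casts never stack wrappers, they just replace the threesome sitting in the closure.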

That's it. We now have a machine that doesn't perform extra dispatching at function calls. There is still a tiny bit of overhead in the form of passing the \(c\) argument. This overhead can be removed by passing the entire closure to itself (instead of passing the array of free variables and the threesome separately) and then accessing the threesome through the closure from inside the function.
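A variant of the sketch for that last tweak (again my own encoding, not the post's): the code pointer receives the closure itself, so the free variables and the optional threesome travel in a single parameter.

```ocaml
(* Variant: the code receives the whole closure, so the environment
   and the optional threesome arrive together. *)
type value =
  | VInt of int
  | VClos of closure
and closure =
  { code : closure -> value -> value   (* the closure itself, then the argument *)
  ; c    : threesome option }

let apply (f : value) (arg : value) : value =
  match f with
  | VClos clos -> clos.code clos arg   (* no separate c to pass *)
  | _ -> failwith "not a function"
```

This is the standard self-application trick for closures; the wrapper code then fetches the threesome from the `c` field of the closure it was handed.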

In the following I give the complete definitions for the new abstract machine. In addition to \(\mathit{dom}\) and \(\mathit{cod}\), we add a cast-free tail call so that tail calls pay no overhead when no cast is involved. \[ \begin{array}{llcl} \text{expressions} & e & ::= & k \mid x \mid \lambda x{:}T.\, s \mid \mathit{dom}(e) \mid \mathit{cod}(e) \\ \text{statements} & s & ::= & d; s \mid \mathbf{return}\,e \mid \mathbf{return}\,e(e) \mid \mathbf{return}\,e(e) : \tau \\ \text{optional threesomes} & \tau_\bot & ::= & \bot \mid \tau \\ \text{values}& v & ::= & k \mid k : \tau \mid \langle \lambda x{:}T.\, s, \rho, \tau_\bot \rangle \end{array} \]
Here's the complete definition of the cast function. \begin{align*} \mathsf{cast}(v, \bot) &= v \\ \mathsf{cast}(k, \tau) &= \begin{cases} k & \text{if } \tau = B \overset{B}{\Longrightarrow} B \\ \mathbf{blame}\,\ell & \text{if } \tau = B \overset{B^p;\bot^\ell}{\Longrightarrow} T\\ k : \tau & \text{otherwise} \end{cases} \\ \mathsf{cast}(k : \tau_1, \tau_2) &= \begin{cases} k & \text{if } (\tau_1;\tau_2) = B \overset{B}{\Longrightarrow} B \\ \mathbf{blame}\,\ell & \text{if } (\tau_1;\tau_2) = B \overset{B^p;\bot^\ell}{\Longrightarrow} T\\ k : (\tau_1;\tau_2) & \text{otherwise} \end{cases} \\ \mathsf{cast}(\langle \lambda x{:}T.\,s, \rho, \bot\rangle ,\tau) &= \begin{cases} \mathbf{blame}\,\ell & \text{if } \tau = (T_1 \overset{I^p;\bot^\ell}{\Longrightarrow} T_2) \\ \langle \lambda x_1.\,s' , \{ f{:=}\langle \lambda x{:}T.\,s, \rho, \bot\rangle \}, \tau \rangle & \text{otherwise} \end{cases} \\ & \text{where } s' = (x_2 = x_1 {:} \mathit{dom}(c); \mathbf{return}\, f(x_2) : \mathit{cod}(c)) \\ \mathsf{cast}(\langle \lambda x{:}T.\,s, \rho, \tau_1\rangle ,\tau_2) &= \begin{cases} \mathbf{blame}\,\ell & \text{if } (\tau_1; \tau_2) = (T_1 \overset{I^p;\bot^\ell}{\Longrightarrow} T_2) \\ \langle \lambda x{:}T.\,s, \rho, (\tau_1; \tau_2)\rangle & \text{otherwise} \end{cases} \end{align*}
Here are the updated evaluation rules. \begin{align*} V(k,\rho) &= k \\ V(x,\rho) &= \rho(x) \\ V(\lambda x{:}T.\, s,\rho) &= \langle \lambda x{:}T.\, s, \rho, \bot \rangle \\ V(\mathit{dom}(e),\rho) &= \tau_1 & \text{if } V(e,\rho) = \tau_1 \to \tau_2 \\ V(\mathit{cod}(e),\rho) &= \tau_2 & \text{if } V(e,\rho) = \tau_1 \to \tau_2 \end{align*}
Lastly, here are the transition rules for the machine. \begin{align*} (x : T_1 = e_1(e_2); s, \rho, \kappa) & \longmapsto (s', \rho'[y{:=}v_2,c{:=}\tau_\bot ], (T_1 \overset{T_1}{\Longrightarrow} T_1, (x,s,\rho)::\kappa)) \\ \text{where } & V(e_1,\rho) = \langle \lambda y{:}T_2. s', \rho', \tau_\bot \rangle \\ \text{and } & V(e_2,\rho) = v_2 \\ (x = \mathit{op}(\overline{e}); s, \rho, \kappa) & \longmapsto (s, \rho[x{:=}v], \kappa) \\ \text{where }& v = \delta(\mathit{op},\overline{V(e,\rho)}) \\ (x = e : \tau; s, \rho, \kappa) & \longmapsto (s, \rho[x{:=}v'], \kappa) \\ \text{where } & V(e,\rho) = v \text{ and } \mathsf{cast}(v,\tau) = v' \\ (\mathbf{return}\,e, \rho, (\tau, (x,s,\rho')::\kappa)) & \longmapsto (s, \rho'[x{:=}v'], \kappa) \\ \text{where }& V(e,\rho) = v \text{ and } \mathsf{cast}(v,\tau) = v' \\ (\mathbf{return}\,e_1(e_2), \rho,\kappa) & \longmapsto (s, \rho'[y{:=}v_2,c{:=}\tau_\bot],\kappa) \\ \text{where } & V(e_1,\rho) = \langle \lambda y{:}T. s, \rho',\tau_\bot\rangle\\ \text{and } & V(e_2,\rho) = v_2 \\ (\mathbf{return}\,e_1(e_2) : \tau_1, \rho,(\tau_2,\sigma)) & \longmapsto (s, \rho'[y{:=}v_2,c{:=}\tau_\bot], ((\tau_1; \tau_2), \sigma)) \\ \text{where } & V(e_1,\rho) = \langle \lambda y{:}T. s, \rho',\tau_\bot\rangle\\ \text{and } & V(e_2,\rho) = v_2 \\[2ex] (x = e : \tau; s, \rho, \kappa) & \longmapsto \mathbf{blame}\,\ell\\ \text{where } & V(e,\rho) = v \text{ and } \mathsf{cast}(v,\tau) = \mathbf{blame}\,\ell \\ (\mathbf{return}\,e, \rho, (\tau,(x,s,\rho')::\kappa)) & \longmapsto \mathbf{blame}\,\ell \\ \text{where }& V(e,\rho) = v \text{ and } \mathsf{cast}(v,\tau) = \mathbf{blame}\,\ell \end{align*}
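For completeness, here is how the base-value cases of the cast function might look in the simplified OCaml sketch. With blame labels dropped, the two blame cases cannot be modeled; what remains is the collapse of identity threesomes at base type and the \(k : \tau\) injection, reusing `compose` from the earlier sketch.

```ocaml
(* Base-value cases of cast under the simplified threesomes: an
   identity threesome at base type collapses to a bare constant,
   otherwise the constant keeps its threesome; composing a second
   threesome can re-collapse it.  Blame cases are elided. *)
type base_value = Bare of int | Tagged of int * threesome

let cast_base (v : base_value) (t : threesome) : base_value =
  match v with
  | Bare k when t.src = t.tgt -> Bare k   (* B ==> B : nothing to record *)
  | Bare k -> Tagged (k, t)               (* k : tau *)
  | Tagged (k, t1) ->
      let t2 = compose t1 t in
      if t2.src = t2.tgt then Bare k else Tagged (k, t2)
```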

I like that this machine has fewer rules, and somewhat simpler ones, than the previous machine. There is one last bit of overhead in statically typed code: in a normal return we have to apply the pending threesome that's on the stack. If one didn't care about making tail calls space efficient in the presence of casts, this wouldn't be necessary. But I care.
