<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="/assets/stylesheets/atom.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
  <link href="https://programming-journal.org/feed.xml" rel="self" type="application/atom+xml" />
  <link href="https://programming-journal.org/" rel="alternate" type="text/html" hreflang="en"/>
  <updated>2026-03-31T22:41:08+00:00</updated>
  <id>https://programming-journal.org//</id>
  <title type="html">The Art, Science, and Engineering of Programming</title>
  <subtitle>The Art, Science, and Engineering of Programming journal is a fully refereed, open access, free, electronic journal. It welcomes papers on the art of programming, broadly construed.</subtitle>
  <author>
    <name>The editors of The Art, Science, and Engineering of Programming</name>
    <email>editors@programming-journal.org</email>
  </author>
  
  
    <entry xml:lang="en">
      <title type="html">Pitfalls in VM Implementation on CHERI: Lessons from Porting CRuby</title>      
      <link href="https://programming-journal.org/2026/11/2/" rel="alternate" type="text/html" title="Pitfalls in VM Implementation on CHERI: Lessons from Porting CRuby" />
      <published>2026-02-15T00:00:00+00:00</published>
      <updated>2026-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2026%2F11%2F2</id>
      
      <author>
          <name>Liu, Hanhaotian</name>
        
      </author>
      
      <author>
          <name>Yamazaki, Tetsuro</name>
        
      </author>
      
      <author>
          <name>Ugawa, Tomoharu</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;CHERI (Capability Hardware Enhanced RISC Instructions) is a novel hardware
designed to address memory safety issues. By replacing traditional pointers with
hardware capabilities, it enhances security in modern software systems. A Virtual
Machine (VM) is one such system that can benefit from CHERI’s protection, as it
may contain latent memory vulnerabilities.&lt;/p&gt;

&lt;p&gt;However, developing and porting VMs to CHERI is a non-trivial task. There are
many subtle pitfalls from the assumptions on the undefined behaviors of the C
language made based on conventional architectures. Those assumptions conflict with CHERI’s stricter memory safety
model, causing unexpected failures.&lt;/p&gt;

&lt;p&gt;Although several prior works have discussed porting VMs to CHERI, they focus on the overall porting process
rather than the pitfalls of VM implementation on CHERI.
A guide for programming in CHERI exists, but it targets
general programming and does not address VM-specific issues.&lt;/p&gt;

&lt;p&gt;We have ported CRuby to CHERI as a case study and surveyed previous works on porting VMs to CHERI.
We categorized and discussed the issues found based on their causes.&lt;/p&gt;

&lt;p&gt;In this paper, we illustrate the VM-specific pitfalls for each category.
Most of the pitfalls arise from the undefined behaviors in the C language; in particular, implementation techniques and idioms of VMs often assume behaviors of traditional architectures that are invalid on CHERI.
We also discuss workarounds for them and the impacts of those workarounds.&lt;/p&gt;

&lt;p&gt;We verified the validity of the workarounds by applying them to our CRuby port and by surveying the codebases of prior case studies.&lt;/p&gt;

&lt;p&gt;This work contributes to the body of knowledge on developing and porting VMs to CHERI and will help guide efforts toward constructing safer VMs.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Hybrid Structured Editing: Structures for Tools, Text for Users</title>      
      <link href="https://programming-journal.org/2026/11/1/" rel="alternate" type="text/html" title="Hybrid Structured Editing" />
      <published>2026-02-15T00:00:00+00:00</published>
      <updated>2026-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2026%2F11%2F1</id>
      
      <author>
          <name>Beckmann, Tom</name>
        
      </author>
      
      <author>
          <name>Thiede, Christoph</name>
        
      </author>
      
      <author>
          <name>Lincke, Jens</name>
        
      </author>
      
      <author>
          <name>Hirschfeld, Robert</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;In programming, better tools often yield better results. For that, modern programming environments offer mechanisms to allow for their extensibility. The closer those tools are to the code, the easier it is for programmers to map the information provided by a tool to the code this information is about.&lt;/p&gt;

&lt;p&gt;However, existing extension mechanisms do not facilitate the close integration of tools with textual source code. Tools must be able to track program structures across edits to appear at the right positions, but the parsing step required for text complicates tracking structures.&lt;/p&gt;

&lt;p&gt;We propose hybrid structured editing, an approach that supports tool builders by providing structural guarantees while providing tool users with a familiar and consistent text editing interface.&lt;/p&gt;

&lt;p&gt;Hybrid structured editing allows tool builders to declare constraints on the structure that a program must conform to and ensures their observance.&lt;/p&gt;

&lt;p&gt;We present an implementation and several case studies of tools based on hybrid structured editing to demonstrate its effectiveness.&lt;/p&gt;

&lt;p&gt;Hybrid structured editing supports the safe extension of programming environments with tools that work on a structured representation of code and provide a consistent and reliable user experience.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Efficient Selection of Type Annotations for Performance Improvement in Gradual Typing</title>      
      <link href="https://programming-journal.org/2026/11/3/" rel="alternate" type="text/html" title="Efficient Selection of Type Annotations for Performance Improvement in Gradual Typing" />
      <published>2026-02-15T00:00:00+00:00</published>
      <updated>2026-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2026%2F11%2F3</id>
      
      <author>
          <name>Li, Senxi</name>
        
      </author>
      
      <author>
          <name>Dai, Feng</name>
        
      </author>
      
      <author>
          <name>Yamazaki, Tetsuro</name>
        
      </author>
      
      <author>
          <name>Chiba, Shigeru</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;Gradual typing has gained popularity as a design choice for
integrating static and dynamic typing within a single language.
Several practical languages have adopted gradual typing to offer
programmers the flexibility to annotate their programs as
needed.
Meanwhile there is a key challenge of unexpected performance
degradation in partially typed programs. The execution speed
may significantly decrease when simply adding more type
annotations.
Prior studies have investigated strategies of selectively adding
type annotations for better performance. However, they are
restricted in substantial compilation time, which impedes the
practical usage.&lt;/p&gt;

&lt;p&gt;This paper presents a new technique to select a subset of type
annotations derived by type inference to improve the
execution performance of gradually typed programs.
The proposal achieves shorter compilation time by
employing a lightweight, amortized approach.
It selects type annotations along data flows, which
is expected to avoid expensive runtime casts caused by a value
repeatedly crossing the boundaries between untyped and typed
code.&lt;/p&gt;

&lt;p&gt;We demonstrate the applicability of our proposal and conduct
experiments to validate its effectiveness in improving
execution time on Reticulated Python.
Our implementation supports a Python subset and selects type
annotations derived by an external type inference
engine that we implemented.
Experimental results show that our proposal outperforms a naive
strategy of using all type annotations derived by type inference
across the benchmark programs.
In comparison with an existing approach, the proposal achieves
comparable execution speed and shows the advantage of maintaining a
more stable compilation time for deriving and selecting type
annotations.
Our results empirically indicate that the proposed technique is
practical within Reticulated Python for mitigating the
performance bottleneck of gradually typed programs.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Evaluating LLMs in the Context of a Functional Programming Course: A Comprehensive Study</title>      
      <link href="https://programming-journal.org/2026/11/5/" rel="alternate" type="text/html" title="Evaluating LLMs in the Context of a Functional Programming Course: A Comprehensive Study" />
      <published>2026-02-15T00:00:00+00:00</published>
      <updated>2026-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2026%2F11%2F5</id>
      
      <author>
          <name>Zhang, Yihan</name>
        
      </author>
      
      <author>
          <name>Pientka, Brigitte</name>
        
      </author>
      
      <author>
          <name>Si, Xujie</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;Large-Language Models (LLMs) are changing the way learners acquire knowledge outside the classroom setting. Previous studies have shown that LLMs seem effective in generating to short and simple questions in introductory CS courses using high-resource programming languages such as Java or Python.&lt;/p&gt;

&lt;p&gt;In this paper, we evaluate the effectiveness of LLMs in the context of a low-resource programming language — OCaml, in an &lt;em&gt;educational&lt;/em&gt; setting. In particular, we built three benchmarks to comprehensively evaluate 9 state-of-the-art LLMs: 1) λCodeGen (a benchmark containing natural-language homework programming problems); 2) λRepair (a benchmark containing programs with syntax, type, and logical errors drawn from actual student submissions); 3) λExplain (a benchmark containing natural language questions regarding theoretical programming concepts). We grade each LLM’s responses with respect to correctness using the OCaml compiler and an autograder. Our evaluation also goes beyond common evaluation methodology by using manual grading to assess the quality of the responses.&lt;/p&gt;

&lt;p&gt;Our study shows that the top three LLMs are effective on all tasks within a typical functional programming course, although they solve far fewer homework problems in the low-resource setting compared to their success on introductory programming problems in Python and Java. The strength of LLMs lies in correcting syntax and type errors as well as generating answers to basic conceptual questions. While LLMs may not yet match dedicated language-specific tools in some areas, their convenience as a one-stop tool for multiple programming languages can outweigh the benefits of more specialized systems.&lt;/p&gt;

&lt;p&gt;We hope our benchmarks can serve multiple purposes: to assess the evolving capabilities of LLMs, to help instructors raise awareness among students about the limitations of LLM-generated solutions, and to inform programming language researchers about opportunities to integrate domain-specific reasoning into LLMs and develop more powerful code synthesis and repair tools for low-resource languages.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">JoinActors: A Modular Library for Actors with Join Patterns</title>      
      <link href="https://programming-journal.org/2026/11/4/" rel="alternate" type="text/html" title="JoinActors: A Modular Library for Actors with Join Patterns" />
      <published>2026-02-15T00:00:00+00:00</published>
      <updated>2026-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2026%2F11%2F4</id>
      
      <author>
          <name>Hussein, Ayman</name>
        
      </author>
      
      <author>
          <name>Haller, Philipp</name>
        
      </author>
      
      <author>
          <name>Karras, Ioannis</name>
        
      </author>
      
      <author>
          <name>Melgratti, Hernán</name>
        
      </author>
      
      <author>
          <name>Scalas, Alceste</name>
        
      </author>
      
      <author>
          <name>Tuosto, Emilio</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;&lt;em&gt;Join patterns&lt;/em&gt; are a high-level programming construct for message-passing
applications. They offer an intuitive and declarative approach for specifying
how concurrent and distributed components coordinate, possibly depending on
complex conditions over combinations of messages. Join patterns have inspired
many implementations — but most of them are not available as libraries: rather,
they are domain-specific languages that can be hard to integrate into
pre-existing ecosystems. Moreover, all implementations ship with a predefined
matching algorithm, which may not be optimal depending on the application
requirements. These limitations are addressed by &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;JoinActors&lt;/code&gt;, a recently
published library that integrates join patterns into the off-the-shelf Scala 3
programming language and is designed to be modular with respect to the matching
algorithm in use.&lt;/p&gt;

&lt;p&gt;In this work we address the problem of designing, developing, and
evaluating a modular join pattern matching toolkit that (1) can be used as a
regular library with a developer-friendly syntax within a pre-existing
programming language, and (2) has an extensible design that supports the use and
comparison of different matching algorithms.&lt;/p&gt;

&lt;p&gt;We analyse how &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;JoinActors&lt;/code&gt; achieves goals (1) and (2) above. The
paper that introduced &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;JoinActors&lt;/code&gt; only briefly outlined its design and
implementation (as its main goal was formalising its novel &lt;em&gt;fair matching
semantics&lt;/em&gt;). In this work we present and discuss in detail an improved version
of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;JoinActors&lt;/code&gt;, focusing on its use of metaprogramming (which enables an
intuitive API resembling standard pattern matching) and on its modular design.
We show how this enables the integration of multiple matching algorithms with
different optimisations and we evaluate their performance via benchmarks
covering different workloads.&lt;/p&gt;

&lt;p&gt;We illustrate a sophisticated use of Scala 3’s metaprogramming
for the integration of an advanced concurrent programming construct within a
pre-existing language. In addition, we discuss the insights and “lessons
learned” in optimising join pattern matching, and how they are facilitated by
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;JoinActors&lt;/code&gt;’s modularity — which allows for the systematic comparison of multiple
matching algorithm implementations.&lt;/p&gt;

&lt;p&gt;We adopt the &lt;em&gt;fair join pattern matching&lt;/em&gt; semantics and the
benchmark suite from the paper that originally introduced &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;JoinActors&lt;/code&gt;. Through
extensive testing we ensure that our new optimised matching algorithms produce
exactly the same matches as the original &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;JoinActors&lt;/code&gt; library, while achieving
significantly better performance. The improved version of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;JoinActors&lt;/code&gt; is the
companion artifact of this paper.&lt;/p&gt;

&lt;p&gt;This work showcases the expressiveness, effectiveness, and
usability of join patterns for implementing complex coordination patterns in
distributed message-passing systems, within a pre-existing language. It also
demonstrates promising performance results, with significant improvements over
previous work. Besides the practical promise, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;JoinActors&lt;/code&gt;’s modular design offers
a research playground for exploring and comparing new join pattern matching
algorithms, possibly based on entirely different semantics.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Filling the Gaps of Polarity: Implementing Dependent Data and Codata Types with Implicit Arguments</title>      
      <link href="https://programming-journal.org/2025/10/19/" rel="alternate" type="text/html" title="Filling the Gaps of Polarity: Implementing Dependent Data and Codata Types with Implicit Arguments" />
      <published>2025-10-15T00:00:00+00:00</published>
      <updated>2025-10-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F19</id>
      
      <author>
          <name>Liesnikov, Bohdan</name>
        
      </author>
      
      <author>
          <name>Binder, David</name>
        
      </author>
      
      <author>
          <name>Süberkrüb, Tim</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;The expression problem describes a fundamental tradeoff between two types of extensibility: extending a type with new &lt;strong&gt;operations&lt;/strong&gt;, such as by pattern matching on an algebraic data type in functional programming, and extending a type with new &lt;strong&gt;constructors&lt;/strong&gt;, such as by adding a new object implementing an interface in object-oriented programming. Most dependently typed languages have good support for the former style through &lt;strong&gt;inductive&lt;/strong&gt; types, but support for the latter style through &lt;strong&gt;coinductive&lt;/strong&gt; types is usually much poorer. Polarity is a language that treats both kinds of types symmetrically and allows the developer to switch between type representations.However, it currently lacks several features expected of a state-of-the-art dependently typed language, such as implicit arguments. The central aim of this paper is to provide an algorithmic type system and inference algorithm for implicit arguments that respect the core symmetry of the language. Our work provides two key contributions: a complete algorithmic description of the type system backing Polarity, and a comprehensive description of a unification algorithm that covers arbitrary inductive and coinductive types. We give rules for reduction semantics, conversion checking, and a unification algorithm for pattern-matching, which are essential for a usable implementation. A work-in-progress implementation of the algorithms in this paper is available at &lt;a href=&quot;https://polarity-lang.github.io/&quot;&gt;polarity-lang.github.io&lt;/a&gt;. We expect that the comprehensive account of the unification algorithm and our design decisions can serve as a blueprint for other dependently typed languages that support inductive and coinductive types symmetrically.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">BlueScript: A Disaggregated Virtual Machine for Microcontrollers</title>      
      <link href="https://programming-journal.org/2025/10/21/" rel="alternate" type="text/html" title="BlueScript: A Disaggregated Virtual Machine for Microcontrollers" />
      <published>2025-10-15T00:00:00+00:00</published>
      <updated>2025-10-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F21</id>
      
      <author>
          <name>Mochizuki, Fumika</name>
        
      </author>
      
      <author>
          <name>Yamazaki, Tetsuro</name>
        
      </author>
      
      <author>
          <name>Chiba, Shigeru</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;Virtual machines (VMs) are highly beneficial for microcontroller development. 
In particular, interactive programming environments greatly facilitate iterative development processes, 
and higher execution speeds expand the range of applications that can be developed. 
However, due to their limited memory size, microcontroller VMs provide a limited set of features. 
Widely used VMs for microcontrollers often lack interactive responsiveness and/or high execution speed. 
While researchers have investigated offloading certain VM components to other machines,the types of components that can be offloaded are still restricted. 
In this paper, we propose a disaggregated VM that offloads as many components as possible to a host machine. 
This makes it possible to exploit the abundant memory of the host machine and its powerful processing capability to provide rich features through the VM. 
As an instance of a disaggregated VM, we design and implement a BlueScript VM. 
The BlueScript VM is a virtual machine for microcontrollers that provides an interactive development environment. 
We offload most of the components of the BlueScript VM to a host machine. 
To reduce communication overhead between the host machine and the microcontroller,&lt;br /&gt;
we employed a data structure called a shadow machine on the host machine, 
which mirrors the execution state of the microcontroller. 
Through our experiments, we confirmed that offloading components does not seriously compromise their expected benefits.&lt;br /&gt;
We assess that an offloaded incremental compiler results in faster execution speed than MicroPython and Espruino,&lt;br /&gt;
while keeping interactivity comparable with MicroPython.&lt;br /&gt;
In addition, our experiments observe that the offloaded dynamic compiler improves VM performance. 
Through this investigation, we demonstrate the feasibility of providing rich features even on VMs for memory-limited microcontrollers.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Chorex: Restartable, Language-Integrated Choreographies</title>      
      <link href="https://programming-journal.org/2025/10/20/" rel="alternate" type="text/html" title="Chorex: Restartable, Language-Integrated Choreographies" />
      <published>2025-10-15T00:00:00+00:00</published>
      <updated>2025-10-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F20</id>
      
      <author>
          <name>Wiersdorf, Ashton</name>
        
      </author>
      
      <author>
          <name>Greenman, Ben</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;We built Chorex, a language that brings choreographic programming to Elixir as a path toward robust distributed applications. Chorex is unique among choreographic languages because it tolerates failure among actors: when an actor crashes, Chorex spawns a new process, restores state using a checkpoint, and updates the network configuration for all actors. Chorex also proves that full-featured choreographies can be implemented via metaprogramming, and that doing so achieves tight integration with the host language. For example, mismatches between choreography requirements and an actor implementation are reported statically and in terms of source code rather than macro-expanded code. This paper illustrates Chorex on several examples, ranging from a higher-order bookseller to a secure remote password protocol, details its implementation, and measures the overhead of checkpointing. We conjecture that Chorex’s projection strategy, which outputs sets of stateless functions, is a viable approach for other languages to support restartable actors.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">If-T: A Benchmark for Type Narrowing</title>      
      <link href="https://programming-journal.org/2025/10/17/" rel="alternate" type="text/html" title="If-T: A Benchmark for Type Narrowing" />
      <published>2025-06-15T00:00:00+00:00</published>
      <updated>2025-06-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F17</id>
      
      <author>
          <name>Guo, Hanwen</name>
        
      </author>
      
      <author>
          <name>Greenman, Ben</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;&lt;strong&gt;Context:&lt;/strong&gt;
The design of static type systems that can validate dynamically-typed programs (&lt;strong&gt;gradually&lt;/strong&gt;) is an ongoing challenge. A key difficulty is that dynamic code rarely follows datatype-driven design. Programs instead use runtime tests to narrow down the proper usage of incoming data. Type systems for dynamic languages thus need a &lt;strong&gt;type narrowing&lt;/strong&gt; mechanism that refines the type environment along individual control paths based on dominating tests, a form of flow-sensitive typing. In order to express refinements, the type system must have some notion of sets and subsets. Since set-theoretic types are computationally and ergonomically complex, the need for type narrowing raises design questions about how to balance precision and performance.&lt;br /&gt;
&lt;strong&gt;Inquiry:&lt;/strong&gt;
To date, the design of type narrowing systems has been driven by intuition, past experience, and examples from users in various language communities. There is no standard that captures desirable and undesirable behaviors. Prior formalizations of narrowing are also significantly more complex than a standard type system, and it is unclear how the extra complexity pays off in terms of concrete examples. This paper addresses these problems through If-T, a language-agnostic &lt;strong&gt;design benchmark&lt;/strong&gt; for type narrowing that characterizes the abilities of implementations using simple programs that draw attention to fundamental questions. Unlike a traditional performance-focused benchmark, If-T measures a narrowing system’s ability to validate correct code and reject incorrect code. Unlike a test suite, systems are not required to fully conform to If-T. Deviations are acceptable provided they are justified by well-reasoned design considerations, such as compile-time performance.&lt;br /&gt;
&lt;strong&gt;Approach:&lt;/strong&gt;
If-T is guided by the literature on type narrowing, the documentation of gradual languages such as TypeScript, and experiments with typechecker implementations. We have identified a set of core technical dimensions for type narrowing. For each dimension, the benchmark contains a set of topics and (at least) two characterizing programs per topic: one that should typecheck and one that should not typecheck.&lt;br /&gt;
&lt;strong&gt;Knowledge:&lt;/strong&gt;
If-T provides a baseline to measure type narrowing systems. For researchers, it provides criteria to categorize future designs via its collection of positive and negative examples. For language designers, the benchmark demonstrates the payoff of typechecker complexity in terms of concrete examples. Designers can use the examples to decide whether supporting a particular example is worthwhile. Both the benchmark and its implementations are freely available online.&lt;br /&gt;
&lt;strong&gt;Grounding:&lt;/strong&gt;
We have implemented the benchmark for five typecheckers: TypeScript, Flow, Typed Racket, mypy, and Pyright. The results highlight important differences, such as the ability to track logical implications among program variables and typechecking for user-defined narrowing predicates.&lt;br /&gt;
&lt;strong&gt;Importance:&lt;/strong&gt;
Type narrowing is essential for gradual type systems, but the tradeoffs between systems with different complexity have been unclear. If-T clarifies these tradeoffs by illustrating the benefits and limitations of each level of complexity. With If-T as a way to assess implementations in a fair, cross-language manner, future type system designs can strive for a better balance among precision, annotation burden, and performance.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">A Type System for Data Privacy Compliance in Active Object Languages</title>      
      <link href="https://programming-journal.org/2025/10/18/" rel="alternate" type="text/html" title="A Type System for Data Privacy Compliance in Active Object Languages" />
      <published>2025-06-15T00:00:00+00:00</published>
      <updated>2025-06-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F18</id>
      
      <author>
          <name>Baramashetru, Chinmayi Prabhu</name>
        
      </author>
      
      <author>
          <name>Giannini, Paola</name>
        
      </author>
      
      <author>
          <name>Tarifa, Silvia Lizeth Tapia</name>
        
      </author>
      
      <author>
          <name>Owe, Olaf</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;Data protection laws such as GDPR aim to give users unprecedented control over their personal data. Compliance with these regulations requires systematically considering information flow and interactions among entities handling sensitive data. Privacy-by-design principles advocate embedding data protection into system architectures as a default. However, translating these abstract principles into concrete, explicit methods remains a significant challenge. This paper addresses this gap by proposing a language-based approach to privacy integration, combining static and runtime techniques. By employing type checking and type inference in an active object language, the framework enables the tracking of authorised data flows and the automatic generation of constraints checked at runtime based on user consent. This ensures that personal data is processed in compliance with GDPR constraints. The key contribution of this work is a type system that gather the compliance checks and the changes to users consent and integrates data privacy compliance verification into system execution. The paper demonstrates the feasibility of this approach through a soundness proof and several examples, illustrating how the proposed language addresses common GDPR requirements, such as user consent, purpose limitation, and data subject rights. This work advances the state of the art in privacy-aware system design by offering a systematic and automated method for integrating GDPR compliance into programming languages. This capability has implications for building trustworthy systems in domains such as healthcare or finance, where data privacy is crucial.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Generating Inputs for Grammar Mining using Dynamic Symbolic Execution</title>      
      <link href="https://programming-journal.org/2025/10/16/" rel="alternate" type="text/html" title="Generating Inputs for Grammar Mining using Dynamic Symbolic Execution" />
      <published>2025-06-15T00:00:00+00:00</published>
      <updated>2025-06-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F16</id>
      
      <author>
          <name>Pointner, Andreas</name>
        
      </author>
      
      <author>
          <name>Pichler, Josef</name>
        
      </author>
      
      <author>
          <name>Prähofer, Herbert</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;A vast number of software systems include components that parse and process structured input. In addition to programming languages, which are analyzed by compilers or interpreters, there are numerous components that process standardized or proprietary data formats of varying complexity. Even if such components were initially developed and tested based on a specification, such as a grammar, numerous modifications and adaptations over the course of software evolution can make it impossible to precisely determine which inputs they actually accept.&lt;br /&gt;
In this situation, grammar mining can be used to reconstruct the specification in the form of a grammar. Established approaches already produce useful results, provided that sufficient input data is available to fully cover the input language. However, achieving this completeness is a major challenge. In practice, only input data recorded during the operation of the software systems is available. If this data is used for grammar mining, the resulting grammar reflects only the actual processed inputs but not the complete grammar of the input language accepted by the software component. As a result, edge cases or previously supported features that no longer appear in the available input data are missing from the generated grammar.&lt;br /&gt;
This work addresses this challenge by introducing a novel approach for the automatic generation of inputs for grammar mining. Although input generators have already been used for fuzz testing, it remains unclear whether they are also suitable for grammar miners. Building on the grammar miner Mimid, this work presents a fully automated approach to input generation. The approach leverages Dynamic Symbolic Execution (DSE) and extends it with two mechanisms to overcome the limitations of DSE regarding structured input parsers. First, the search for new inputs is guided by an iterative expansion that starts with a single-character input and gradually extends it. Second, input generation is structured into a novel three-phase approach, which separates the generation of inputs for parser functions.&lt;br /&gt;
The proposed method was evaluated against a diverse set of eleven benchmark applications from the existing literature. Results demonstrate that the approach achieves precision and recall for extracted grammars close to those derived from state-of-the-art grammar miners such as Mimid. Notably, it successfully uncovers subtle features and edge cases in parsers that are typically missed by such grammar miners. The effectiveness of the method is supported by empirical evidence, showing that it can achieve high performance in various domains without requiring prior input samples.&lt;br /&gt;
This contribution is significant for researchers and practitioners in software engineering, offering an automated, scalable, and precise solution for grammar mining. By eliminating the need for manual input generation, the approach not only reduces workload but also enhances the robustness and comprehensiveness of the extracted grammars. Following this approach, software engineers can reconstruct specifications from existing (legacy) parsers.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">On the State of Coherence in the Land of Type Classes</title>      
      <link href="https://programming-journal.org/2025/10/15/" rel="alternate" type="text/html" title="On the State of Coherence in the Land of Type Classes" />
      <published>2025-02-15T00:00:00+00:00</published>
      <updated>2025-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F15</id>
      
      <author>
          <name>Racordon, Dimi</name>
        
      </author>
      
      <author>
          <name>Flesselle, Eugene</name>
        
      </author>
      
      <author>
          <name>Pham, Cao Nguyen</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;Type classes are a popular tool for implementing generic algorithms and data structures without loss of efficiency, bridging the gap between parametric and ad-hoc polymorphism. Since their initial development in Haskell, they now feature prominently in numerous other industry-ready programming languages, notably including Swift, Rust, and Scala. The success of type classes hinges in large part on the compilers’ ability to infer arguments to implicit parameters by means of a type-directed resolution. This technique, sometimes dubbed &lt;strong&gt;implicit programming&lt;/strong&gt;, lets users elide information that the language implementation can deduce from the context, such as the implementation of a particular type class.&lt;/p&gt;

&lt;p&gt;One drawback of implicit programming is that a type-directed resolution may yield ambiguous results, thereby threatening coherence, the property that valid programs have exactly one meaning. This issue has divided the community on the right approach to address it. One side advocates for flexibility where implicit resolution is context-sensitive and often relies on dependent typing features to uphold soundness. The other holds that context should not stand in the way of equational reasoning and typically imposes that type class instances be unique across the entire program to fend off ambiguities.&lt;/p&gt;

&lt;p&gt;Although there exists a large body of work on type classes and implicit programming, most of the scholarly literature focuses on a few select languages and offers little insight into other mainstream projects. Meanwhile, the latter have evolved similar features and/or restrictions under different names, making it difficult for language users and designers to get a sense of the full design space. To alleviate this issue, we set out to examine Swift, Rust, and Scala, three popular languages featuring type classes heavily, and relate their approach to coherence to Haskell’s. It turns out that, beyond superficial syntactic differences, Swift, Rust, and Haskell are actually strikingly similar in that the three languages offer comparable strategies to work around the limitations of the uniqueness of type class instances.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Two Approaches for Programming Education in the Domain of Graphics: An Experiment</title>      
      <link href="https://programming-journal.org/2025/10/14/" rel="alternate" type="text/html" title="Two Approaches for Programming Education in the Domain of Graphics: An Experiment" />
      <published>2025-02-15T00:00:00+00:00</published>
      <updated>2025-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F14</id>
      
      <author>
          <name>Chiodini, Luca</name>
        
      </author>
      
      <author>
          <name>Sorva, Juha</name>
        
      </author>
      
      <author>
          <name>Hellas, Arto</name>
        
      </author>
      
      <author>
          <name>Seppälä, Otto</name>
        
      </author>
      
      <author>
          <name>Hauswirth, Matthias</name>
        
      </author>
      
      
        <summary type="html">&lt;h4 id=&quot;context&quot;&gt;Context&lt;/h4&gt;
&lt;p&gt;Graphics is a popular domain for teaching introductory programming in a motivating way, even in text-based programming languages.
Over the last few decades, a large number of libraries using different approaches have been developed for this purpose.&lt;/p&gt;

&lt;h4 id=&quot;inquiry&quot;&gt;Inquiry&lt;/h4&gt;

&lt;p&gt;Prior work in introductory programming that uses graphics as input and output has shown positive results in terms of engagement,
but research is scarce on whether learners are able to use programming concepts learned through graphics for programming in other domains,
transferring what they have learned.&lt;/p&gt;

&lt;h4 id=&quot;approach&quot;&gt;Approach&lt;/h4&gt;

&lt;p&gt;We conducted a randomized, controlled experiment with 145 students as participants divided into two groups.
Both groups programmed using graphics in Python, but used different approaches:
one group used a compositional graphics library named PyTamaro; the other used the Turtle graphics library from Python’s standard library.
Student engagement was assessed with surveys, and programming knowledge with a post-test on
general programming concepts and programming tasks in the domain of graphics.&lt;/p&gt;

&lt;h4 id=&quot;knowledge&quot;&gt;Knowledge&lt;/h4&gt;

&lt;p&gt;We find few differences between the two groups on the post-test,
despite the PyTamaro group having practiced on problems isomorphic to those in the post-test.
The participants traced a compositional graphics program more accurately than a ‘comparable’ turtle graphics program.
Both groups report high engagement and perceived learning; both perform well on simple program-writing tasks to create graphics.&lt;/p&gt;

&lt;h4 id=&quot;grounding&quot;&gt;Grounding&lt;/h4&gt;

&lt;p&gt;Our findings are based on a controlled experiment with 145 participants, which exceeds the sample size indicated by power analysis to detect a medium effect size.
The complete instrument and teaching materials used in the study are available as appendixes.&lt;/p&gt;

&lt;h4 id=&quot;importance&quot;&gt;Importance&lt;/h4&gt;

&lt;p&gt;This study adds further evidence that graphics is an engaging domain for introductory programming;
moreover, it shows that the compositional graphics approach adopted by PyTamaro yields engagement levels comparable to the venerable turtle approach.
Compositional graphics code appears to be easier to trace than turtle graphics code.
As for conceptual knowledge, our results indicate that practicing on programming tasks isomorphic to those of the test may still not be enough to achieve better transfer.
This challenges programming educators and researchers to investigate further which graphics-based approaches work best and how to facilitate transfer.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">PolyDebug: A Framework for Polyglot Debugging</title>      
      <link href="https://programming-journal.org/2025/10/13/" rel="alternate" type="text/html" title="PolyDebug: A Framework for Polyglot Debugging" />
      <published>2025-02-15T00:00:00+00:00</published>
      <updated>2025-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F13</id>
      
      <author>
          <name>Houdaille, Philémon</name>
        
      </author>
      
      <author>
          <name>Khelladi, Djamel Eddine</name>
        
      </author>
      
      <author>
          <name>Combemale, Benoit</name>
        
      </author>
      
      <author>
          <name>Mussbacher, Gunter</name>
        
      </author>
      
      <author>
          <name>van der Storm, Tijs</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;As software grows increasingly complex, the quantity and diversity of concerns to be addressed also rises. To answer this diversity of concerns, developers may end up using multiple programming languages in a single software project, a practice known as polyglot programming. This practice has gained momentum with the rise of execution platforms capable of supporting polyglot systems.&lt;/p&gt;

&lt;p&gt;However, despite this momentum, there is a notable lack of development tooling support for developers working on polyglot programs, such as in debugging facilities. Not all polyglot execution platforms provide debugging capabilities, and for those that do, implementing support for new languages can be costly.&lt;/p&gt;

&lt;p&gt;This paper addresses this gap by introducing a novel debugger framework that is language-agnostic yet leverages existing language-specific debuggers. The proposed framework is dynamically extensible to accommodate the evolving combination of languages used in polyglot software development. It utilizes the Debug Adapter Protocol (DAP) to integrate and coordinate existing debuggers within a debugging session.&lt;/p&gt;

&lt;p&gt;We found that, using our approach, we were able to implement polyglot debugging support for three different languages with little development effort. We also found that our debugger did not introduce an overhead significant enough to hinder debugging tasks in many scenarios; however, performance did deteriorate with the number of polyglot calls, making the approach not suitable for every polyglot program structure.&lt;/p&gt;

&lt;p&gt;The effectiveness of this approach is demonstrated through the development of a prototype, PolyDebug, and its application to use cases involving C, JavaScript, and Python. We evaluated PolyDebug on a dataset of traditional benchmark programs, modified to fit our criteria of polyglot programs. We also assessed the development effort by measuring the source lines of code (SLOC) for the prototype as a whole as well as its components.&lt;/p&gt;

&lt;p&gt;Debugging is a fundamental part of developing and maintaining software. A lack of debugging tools can make it difficult to locate software bugs and can slow down the development process. We believe this work is relevant to helping provide developers with proper debugging support regardless of the runtime environment.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">An Attempt to Catch Up with JIT Compilers: The False Lead of Optimizing Inline Caches</title>      
      <link href="https://programming-journal.org/2025/10/6/" rel="alternate" type="text/html" title="An Attempt to Catch Up with JIT Compilers: The False Lead of Optimizing Inline Caches" />
      <published>2025-02-15T00:00:00+00:00</published>
      <updated>2025-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F6</id>
      
      <author>
          <name>Poirier, Aurore</name>
        
      </author>
      
      <author>
          <name>Rohou, Erven</name>
        
      </author>
      
      <author>
          <name>Serrano, Manuel</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;&lt;strong&gt;Context:&lt;/strong&gt; Just-in-Time (JIT) compilers are able to specialize the code they generate according to a continuous profiling of the running programs. This gives them an advantage when compared to Ahead-of-Time (AoT) compilers that must choose the code to generate once for all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inquiry:&lt;/strong&gt; Is it possible to improve the performance of AoT compilers by adding Dynamic Binary Modification (DBM) to the executions?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Approach:&lt;/strong&gt; We added to the Hopc AoT JavaScript compiler a new optimization based on DBM to the inline cache (IC), a classical optimization dynamic languages use to implement object property accesses efficiently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Knowledge:&lt;/strong&gt; Reducing the number of memory accesses, as the new optimization does, does not shorten execution times on contemporary architectures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Grounding:&lt;/strong&gt; The DBM optimization we have implemented is fully operational on x86_64 architectures. We have conducted several experiments to evaluate its impact on performance and to study the reasons for the lack of acceleration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Importance:&lt;/strong&gt; The (negative) result we present in this paper sheds new light on the best strategy to be used to implement dynamic languages. It tells us that the old days, when removing instructions or removing memory reads always yielded a speedup, are over. Nowadays, implementing sophisticated compiler optimizations is only worth the effort if the processor is not able to accelerate the code by itself. This result applies to AoT compilers as well as JIT compilers.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Conversational Concurrency with Dataspaces and Facets</title>      
      <link href="https://programming-journal.org/2025/10/2/" rel="alternate" type="text/html" title="Conversational Concurrency with Dataspaces and Facets" />
      <published>2025-02-15T00:00:00+00:00</published>
      <updated>2025-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F2</id>
      
      <author>
          <name>Caldwell, Sam</name>
        
      </author>
      
      <author>
          <name>Garnock-Jones, Tony</name>
        
      </author>
      
      <author>
          <name>Felleisen, Matthias</name>
        
      </author>
      
      
        <summary type="html">&lt;h4 id=&quot;context&quot;&gt;Context&lt;/h4&gt;

&lt;p&gt;Developers have come to appreciate the simplicity of message-passing actors for
concurrent programming tasks. The actor model of computation is easy to grasp;
it is just a conversation among actors with a common goal. Importantly, it
eliminates some basic pitfalls of the dominant shared-memory model, most
critically data races.&lt;/p&gt;

&lt;h4 id=&quot;inquiry&quot;&gt;Inquiry&lt;/h4&gt;

&lt;p&gt;A close look at real-world conversations suggests, however, that they are not
mere exchanges of messages. Participants must keep in mind conversational
context, and participants joining late can and often must acquire some of this
context. In addition, some settings call for engaging in several conversations
in parallel; in others, participants conduct temporarily limited
sub-conversations to clarify a point. Existing actor code exhibits complex
design patterns that get around the underlying limitations of the pure
message-passing model.&lt;/p&gt;

&lt;h4 id=&quot;approach&quot;&gt;Approach&lt;/h4&gt;

&lt;p&gt;These patterns suggest a number of elements involved in programming
conversational computations. Translated into terms of language design, they call
for two kinds of facilities: (1) one for sharing conversational context and (2)
another one for organizing individual actors around on-going conversations and their
contexts.&lt;/p&gt;

&lt;h4 id=&quot;knowledge&quot;&gt;Knowledge&lt;/h4&gt;

&lt;p&gt;This paper presents Syndicate, a language designed to directly support the
programming of conversing actors. Beyond message passing, it supplies
(1) a dataspace, which allows actors to make public assertions, to withdraw
them, and to query what other actors have asserted; and (2) the facet notation,
which enables programmers to express individual actors as a reflection of the
on-going conversations.&lt;/p&gt;

&lt;h4 id=&quot;grounding&quot;&gt;Grounding&lt;/h4&gt;

&lt;p&gt;A worked example introduces these concepts and illustrates conversational
programming in Syndicate. A comparison with other research and industrial
concurrent languages demonstrates the unique support Syndicate provides.&lt;/p&gt;

&lt;h4 id=&quot;importance&quot;&gt;Importance&lt;/h4&gt;

&lt;p&gt;Syndicate advances concurrent actor programming with enhancements that address
some observed limitations of the underlying model. While message-passing
simplifies concurrent programming, it falls short in handling the complexities
of actual computational conversations. By introducing a dataspace actor for
sharing conversational context and the facet notation for organizing actors
around ongoing conversations, Syndicate enables developers to naturally express
and manage the nuanced interactions often required in concurrent systems. These
innovations reduce the need for complex design patterns and provide unique
support for building robust, context-aware concurrent applications.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Consistent Distributed Reactive Programming with Retroactive Computation</title>      
      <link href="https://programming-journal.org/2025/10/11/" rel="alternate" type="text/html" title="Consistent Distributed Reactive Programming with Retroactive Computation" />
      <published>2025-02-15T00:00:00+00:00</published>
      <updated>2025-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F11</id>
      
      <author>
          <name>Kamina, Tetsuo</name>
        
      </author>
      
      <author>
          <name>Aotani, Tomoyuki</name>
        
      </author>
      
      <author>
          <name>Masuhara, Hidehiko</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;&lt;strong&gt;Context:&lt;/strong&gt; Many systems require receiving data from multiple information sources, which act as distributed network devices that asynchronously send the latest data at their own pace to generalize various kinds of devices and connections, known as the Internet of Things (IoT). These systems often perform computations both &lt;strong&gt;reactively&lt;/strong&gt; and &lt;strong&gt;retroactively&lt;/strong&gt; on information received from the sources for monitoring and analytical purposes, respectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inquiry:&lt;/strong&gt; It is challenging to design a programming language that can describe such systems at a high level of abstraction for two reasons: (1) reactive and retroactive computations in these systems are performed alongside the execution of other application logic; and (2) information sources may be distributed, and data from these sources may arrive late or be lost entirely. Addressing these difficulties is our fundamental problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Approach:&lt;/strong&gt; We propose a programming language that supports the following features. First, our language incorporates reactive time-varying values (also known as signals) embedded within an imperative language. Second, it supports multiple information sources that are distributed and represented as signals, meaning they can be declaratively composed to form other time-varying values. Finally, it allows computation over past values collected from information sources and recovery from inconsistency caused by packet loss. To address the aforementioned difficulties, we develop a core calculus for this proposed language.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Knowledge:&lt;/strong&gt; This calculus is a hybrid of reactive/retroactive computations and imperative ones. Because of this hybrid nature, the calculus is inherently complex; however, we have simplified it as much as possible. First, its semantics are modeled as a simple, single-threaded abstraction based on typeless object calculus. Meanwhile, reactive computations that execute in parallel are modeled using a simple process calculus and are integrated with the object calculus, ensuring that the computation results are always serialized. Specifically, we show that time consistency is guaranteed in the calculus; in other words, consistency can be recovered at any checkpoint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Grounding:&lt;/strong&gt; This work is supported by formally stating and proving theorems regarding time consistency. We also conducted a microbenchmarking experiment to demonstrate that the implemented recovery process is feasible in our assumed application scenarios.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Importance:&lt;/strong&gt; The ensured time consistency provides a rigorous foundation for performing analytics on computation results obtained from distributed information sources, even when these sources experience delays or packet loss.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Evolution Language Framework for Persistent Objects</title>      
      <link href="https://programming-journal.org/2025/10/12/" rel="alternate" type="text/html" title="Evolution Language Framework for Persistent Objects" />
      <published>2025-02-15T00:00:00+00:00</published>
      <updated>2025-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F12</id>
      
      <author>
          <name>Kamina, Tetsuo</name>
        
      </author>
      
      <author>
          <name>Aotani, Tomoyuki</name>
        
      </author>
      
      <author>
          <name>Masuhara, Hidehiko</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;&lt;strong&gt;Context:&lt;/strong&gt; Multi-schema-version data management (MSVDM) is the database technology that simultaneously supports multiple schema versions of one database. With the technology, multiple versions of one software system can co-exist and exchange data even when the system’s data structure evolves along with versions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inquiry:&lt;/strong&gt;
While MSVDM theories and implementations have been developed for relational databases, they are not directly applicable to persistent objects. Since persistent objects are commonly implemented by means of object-relational mapping (OR-mapping), we need the right level of abstraction to describe the evolution of data structures and translate data accesses between different versions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Approach:&lt;/strong&gt;
We propose a new evolution language consisting of a set of evolution operations, each denoting a modification of the source code and implicitly defining the corresponding modification to the database schema. Given the existence of multiple mapping mechanisms from persistent objects to databases, we designed the evolution language at two levels. At the abstract level, it handles scenarios such as refactoring and adding classes and fields. At the concrete level, we provide definitions for different mapping mechanisms separately, leveraging the existing database evolution language that supports MSVDM.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Knowledge:&lt;/strong&gt;
Our evolution language is designed to support existing evolution operations proposed in prior work. Additionally, it introduces support for operations related to class hierarchy changes, which are not covered by previous approaches. Using our proposal, two concrete mapping mechanisms, namely, a JPA-like mapping and signal classes, can be provided separately. Furthermore, our evolution language preserves program behavior and covers common evolution operations in practice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Grounding:&lt;/strong&gt;
This work is supported by the formal definition of both the target abstract core language and the proposed evolution language, the formulation of several theorems demonstrating the soundness of our proposals, and the proofs of these theorems.
Additionally, an empirical study was conducted to investigate the evolution histories of three open-source projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Importance:&lt;/strong&gt;
To the best of our knowledge, our proposal is the first evolution language for persistent objects that supports MSVDM. Moreover, it is the first evolution language defined at an abstract level. By defining mappings separately, we can apply it to a wide range of persistent object mechanisms built on top of SQL.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Study of the Use of Property Probes in an Educational Setting</title>      
      <link href="https://programming-journal.org/2025/10/10/" rel="alternate" type="text/html" title="Study of the Use of Property Probes in an Educational Setting" />
      <published>2025-02-15T00:00:00+00:00</published>
      <updated>2025-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F10</id>
      
      <author>
          <name>Risberg Alaküla, Anton</name>
        
      </author>
      
      <author>
          <name>Fors, Niklas</name>
        
      </author>
      
      <author>
          <name>Söderberg, Emma</name>
        
      </author>
      
      
        <summary type="html">&lt;h4 id=&quot;context&quot;&gt;Context&lt;/h4&gt;
&lt;p&gt;Developing compilers and static analysis tools (“language tools”) is a difficult and time-consuming task.
We have previously presented &lt;em&gt;property probes&lt;/em&gt;, a technique to help the language tool developer build understanding of their tool.
A probe presents a live view into the internals of the compiler, enabling the developer to see all the intermediate steps of a compilation or analysis rather than just the final output.
This technique has been realized in a tool called CodeProber.&lt;/p&gt;

&lt;h4 id=&quot;inquiry&quot;&gt;Inquiry&lt;/h4&gt;

&lt;p&gt;CodeProber has been in active use in both research and education for over two years, but its practical use has not been well studied.
CodeProber combines liveness, AST exploration and presenting program analysis results on top of source code.
While there are other tools that specifically target language tool developers, we are not aware of any that has the same design as CodeProber, much less any such tool with an extensive user study.
We therefore claim there is a lack of knowledge about how property probes (and, by extension, CodeProber) are used in practice.&lt;/p&gt;

&lt;h4 id=&quot;approach&quot;&gt;Approach&lt;/h4&gt;

&lt;p&gt;We present the results from a mixed-method study of the use of CodeProber in an educational setting, with the goal of discovering whether and how property probes help, and how they compare to more traditional techniques such as test cases and print debugging.
In the study, we analyzed data from 11 in-person interviews with students using CodeProber as part of a course on program analysis.
We also analyzed CodeProber event logs from 24 students in the same course, and 51 anonymized survey responses across two courses where CodeProber was used.&lt;/p&gt;

&lt;h4 id=&quot;knowledge&quot;&gt;Knowledge&lt;/h4&gt;

&lt;p&gt;Our findings show that the students find CodeProber to be useful, and they make continuous use of it during the course labs.
We further find that the students in our study seem to partially or fully use CodeProber instead of other development tools and techniques, e.g. breakpoint/step-debugging, test cases and print debugging.&lt;/p&gt;

&lt;h4 id=&quot;grounding&quot;&gt;Grounding&lt;/h4&gt;

&lt;p&gt;Our claims are supported by three different data sources: 11 in-person interviews, log analysis from 24 students, and surveys with 51 responses.&lt;/p&gt;

&lt;h4 id=&quot;importance&quot;&gt;Importance&lt;/h4&gt;

&lt;p&gt;We hope our findings inspire others to consider live exploration to help language tool developers build understanding of their tool.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Monk: Opportunistic Scheduling to Delay Horizontal Scaling</title>      
      <link href="https://programming-journal.org/2025/10/1/" rel="alternate" type="text/html" title="Monk: Opportunistic Scheduling to Delay Horizontal Scaling" />
      <published>2025-02-15T00:00:00+00:00</published>
      <updated>2025-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F1</id>
      
      <author>
          <name>Shimchenko, Marina</name>
        
      </author>
      
      <author>
          <name>Österlund, Erik</name>
        
      </author>
      
      <author>
          <name>Wrigstad, Tobias</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;In modern server computing, efficient CPU resource usage is often
traded for latency. Garbage collection is a key aspect of memory
management in programming languages like Java, but it often
competes with application threads for CPU time, leading to delays
in processing requests and consequent increases in latency. This
work explores whether opportunistic scheduling in ZGC, a fully
concurrent garbage collector (GC), can reduce application latency
at middle-range CPU utilization, a common deployment scenario, and
potentially delay horizontal scaling. We implemented an
opportunistic scheduling policy that schedules GC threads during
periods when CPU resources would otherwise be idle. This method
prioritizes application threads over GC workers when it matters
most, allowing the system to handle higher workloads without
increasing latency. Our findings show that this technique can
significantly improve performance in server applications. For
example, in tests using the SPECjbb2015 benchmark, we observed up
to a 15% increase in the number of requests processed within the
target 25ms latency. Additionally, applications like
Hazelcast showed a mean latency reduction of up to 40% compared
to ZGC without opportunistic scheduling. The feasibility and
effectiveness of this approach were validated through empirical
testing on two widely used benchmarks, showing that the method
consistently improves performance under various workloads. This
work is significant because it addresses a common bottleneck in
server performance—how to manage GC without degrading application
responsiveness. By improving how GC threads are scheduled, this
research offers a pathway to more efficient resource usage,
enabling higher performance and better scalability in server
applications.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Automated Profile-Guided Replacement of Data Structures to Reduce Memory Allocation</title>      
      <link href="https://programming-journal.org/2025/10/3/" rel="alternate" type="text/html" title="Automated Profile-Guided Replacement of Data Structures to Reduce Memory Allocation" />
      <published>2025-02-15T00:00:00+00:00</published>
      <updated>2025-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F3</id>
      
      <author>
          <name>Makor, Lukas</name>
        
      </author>
      
      <author>
          <name>Kloibhofer, Sebastian</name>
        
      </author>
      
      <author>
          <name>Hofer, Peter</name>
        
      </author>
      
      <author>
          <name>Leopoldseder, David</name>
        
      </author>
      
      <author>
          <name>Mössenböck, Hanspeter</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;Data structures are a cornerstone of most modern programming languages. Whether they are provided via separate libraries, built into the language specification, or as part of the language’s standard library - data structures such as lists, maps, sets, or arrays provide programmers with a large repertoire of tools to deal with data. 
Moreover, each kind of data structure typically comes with a variety of implementations that focus on scalability, memory efficiency, performance, thread-safety, or similar aspects.&lt;/p&gt;

&lt;p&gt;Choosing the &lt;em&gt;right&lt;/em&gt; data structure for a particular use case can be difficult or even impossible if the data structure is part of a framework over which the user has no control. It typically requires in-depth knowledge about the program and, in particular, about the usage of the data structure in question. 
However, it is usually not feasible for developers to obtain such information about programs in advance. 
Hence, it makes sense to look for a more automated way to optimize data structures.&lt;/p&gt;

&lt;p&gt;We present an approach to automatically replace data structures in Java applications. 
We use profiling to determine allocation-site-specific metrics about data structures and their usages, and then automatically replace their allocations with customized versions, focusing on memory efficiency. 
Our approach is integrated into GraalVM Native Image, an Ahead-of-Time compiler for Java applications.&lt;/p&gt;

&lt;p&gt;By analyzing the generated data structure profiles, we show how standard benchmarks and microservice-based applications use data structures and demonstrate the impact of customized data structures on the memory usage of applications.&lt;/p&gt;

&lt;p&gt;We conducted an evaluation on standard and microservice-based benchmarks, in which the memory usage was reduced by up to 13.85 % in benchmarks that make heavy use of data structures. While others are only slightly affected, we could still reduce the average memory usage by 1.63 % in standard benchmarks and by 2.94 % in microservice-based benchmarks.&lt;/p&gt;

&lt;p&gt;We argue that our work demonstrates that choosing appropriate data structures can reduce the memory usage of applications. While we acknowledge that our approach does not provide benefits for all kinds of workloads, our work nevertheless shows how automated profiling and replacement can be used to optimize data structures in general.
Hence, we argue that our work could pave the way for future optimizations of data structures.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Probing the Design Space: Parallel Versions for Exploratory Programming</title>      
      <link href="https://programming-journal.org/2025/10/5/" rel="alternate" type="text/html" title="Probing the Design Space: Parallel Versions for Exploratory Programming" />
      <published>2025-02-15T00:00:00+00:00</published>
      <updated>2025-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F5</id>
      
      <author>
          <name>Beckmann, Tom</name>
        
      </author>
      
      <author>
          <name>Bergsiek, Joana</name>
        
      </author>
      
      <author>
          <name>Krebs, Eva</name>
        
      </author>
      
      <author>
          <name>Mattis, Toni</name>
        
      </author>
      
      <author>
          <name>Ramson, Stefan</name>
        
      </author>
      
      <author>
          <name>Rinard, Martin C.</name>
        
      </author>
      
      <author>
          <name>Hirschfeld, Robert</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;Exploratory programming involves open-ended tasks. To evaluate their progress on these, programmers require frequent feedback and means to tell if the feedback they observe is bringing them in the right direction. Collecting, comparing, and sharing feedback is typically done through ad-hoc means: relying on memory to compare outputs, code comments, or manual screenshots. To approach this issue, we designed Exploriants: an extension to example-based live programming. Exploriants allows programmers to place variation points. It collects outputs captured in probes and presents them in a comparison view that programmers can customize to suit their program domain. We find that the addition of variation points and the comparisons view encourages a structured approach to exploring variations of a program. We demonstrate Exploriants’ capabilities and applicability in three case studies on image processing, data processing, and game development. Given Exploriants, exploratory programmers are given a straightforward means to evaluate their progress and do not have to rely on ad-hoc methods that may introduce errors.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Dynamic Program Slices Change How Developers Diagnose Gradual Run-Time Type Errors</title>      
      <link href="https://programming-journal.org/2025/10/8/" rel="alternate" type="text/html" title="Dynamic Program Slices Change How Developers Diagnose Gradual Run-Time Type Errors" />
      <published>2025-02-15T00:00:00+00:00</published>
      <updated>2025-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F8</id>
      
      <author>
          <name>Bañados Schwerter, Felipe</name>
        
      </author>
      
      <author>
          <name>Garcia, Ronald</name>
        
      </author>
      
      <author>
          <name>Holmes, Reid</name>
        
      </author>
      
      <author>
          <name>Ali, Karim</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;A gradual type system allows developers to declare certain types to be enforced by the compiler (i.e., statically typed), while leaving other types to be enforced via runtime checks (i.e., dynamically typed). When runtime checks fail, debugging gradually typed programs becomes cumbersome, because these failures may arise far from the original point where an inconsistent type assumption is made. To ease this burden on developers, some gradually typed languages produce a blame report for a given type inconsistency. However, these reports are sometimes misleading, because they might point to program points that do not need to be changed to stop the error.&lt;/p&gt;

&lt;p&gt;To overcome the limitations of blame reports, we propose using dynamic program slicing as an alternative approach to help programmers debug run-time type errors. We describe a proof-of-concept for TypeSlicer, a tool that would present dynamic program slices to developers when a runtime check fails. We performed a Wizard-of-Oz user study to investigate how developers respond to dynamic program slices through a set of simulated interactions with TypeScript programs. This formative study shows that developers can understand and apply dynamic slice information to provide change recommendations when debugging runtime type errors.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Skitter: A Distributed Stream Processing Framework with Pluggable Distribution Strategies</title>      
      <link href="https://programming-journal.org/2025/10/4/" rel="alternate" type="text/html" title="Skitter: A Distributed Stream Processing Framework with Pluggable Distribution Strategies" />
      <published>2025-02-15T00:00:00+00:00</published>
      <updated>2025-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F4</id>
      
      <author>
          <name>Saey, Mathijs</name>
        
      </author>
      
      <author>
          <name>De Koster, Joeri</name>
        
      </author>
      
      <author>
          <name>De Meuter, Wolfgang</name>
        
      </author>
      
      
        <summary type="html">&lt;h4 id=&quot;context&quot;&gt;Context&lt;/h4&gt;
&lt;p&gt;Distributed Stream Processing Frameworks (DSPFs) are popular tools for expressing Big Data applications that must handle enormous volumes of data in real time. These frameworks distribute their applications over a cluster in order to scale horizontally along with the amount of incoming data.&lt;/p&gt;

&lt;h4 id=&quot;inquiry&quot;&gt;Inquiry&lt;/h4&gt;

&lt;p&gt;Crucial for the performance of such applications is the &lt;strong&gt;distribution strategy&lt;/strong&gt; that is used to partition data and computations over the cluster nodes.
In some DSPFs, such as Apache Spark or Flink, the distribution strategy is hardwired into the framework, which can lead to inefficient applications.
The other end of the spectrum is offered by Apache Storm, which offers a low-level model wherein programmers can implement their own distribution strategies on a per-application basis to improve efficiency.
However, this model conflates distribution and data processing logic, making it difficult to modify either.
As a consequence, today’s cluster application developers either have to accept the built-in distribution strategies of a high-level framework or accept the complexity of expressing a distribution strategy in Storm’s low-level model.&lt;/p&gt;

&lt;h4 id=&quot;approach&quot;&gt;Approach&lt;/h4&gt;

&lt;p&gt;We propose a novel programming model wherein data processing operations and their distribution strategies are decoupled from one another and where new strategies can be created in a modular fashion.&lt;/p&gt;

&lt;h4 id=&quot;knowledge&quot;&gt;Knowledge&lt;/h4&gt;

&lt;p&gt;The introduced language abstractions cleanly separate the data processing and distribution logic of a stream processing application.
This enables the expression of stream processing applications in a high-level framework while still retaining the flexibility offered by Storm’s low-level model.&lt;/p&gt;

&lt;h4 id=&quot;grounding&quot;&gt;Grounding&lt;/h4&gt;

&lt;p&gt;We implement our programming model as a domain-specific language, called Skitter, and use it to evaluate our approach.
Our evaluation shows that Skitter enables the implementation of existing distribution strategies from the state of the art in a modular fashion.
Our performance evaluation shows that the strategies implemented in Skitter exhibit the expected performance characteristics and that applications written in Skitter achieve throughput rates of the same order of magnitude as Storm.&lt;/p&gt;

&lt;h4 id=&quot;importance&quot;&gt;Importance&lt;/h4&gt;

&lt;p&gt;Our work enables developers to select the most performant distribution strategy for each operation in their application, while still retaining the programming model offered by high-level frameworks.&lt;/p&gt;
</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">The Formal Semantics and Implementation of a Domain-Specific Language for Mixed-Initiative Dialogs</title>      
      <link href="https://programming-journal.org/2025/10/7/" rel="alternate" type="text/html" title="The Formal Semantics and Implementation of a Domain-Specific Language for Mixed-Initiative Dialogs" />
      <published>2025-02-15T00:00:00+00:00</published>
      <updated>2025-02-15T00:00:00+00:00</updated>
      <id>urn:doi:10.22152%2Fprogramming-journal.org%2F2025%2F10%2F7</id>
      
      <author>
          <name>Rowland, Zachary S.</name>
        
      </author>
      
      <author>
          <name>Perugini, Saverio</name>
        
      </author>
      
      
        <summary type="html">&lt;p&gt;Human-computer dialog plays a prominent role in interactions conducted at kiosks (e.g., withdrawing money from an ATM or filling your car with gas), on smartphones (e.g., installing and configuring apps), and on the web (e.g., booking a flight). Some human-computer dialogs involve an exchange of system-initiated and user-initiated actions. These dialogs are called &lt;em&gt;mixed-initiative dialogs&lt;/em&gt; and sometimes also involve the pursuit of multiple interleaved sub-dialogs, which are woven together in a manner akin to coroutines. However, existing dialog-authoring languages have difficulty expressing these dialogs concisely. In this work, we improve the expressiveness of a dialog-authoring language we call &lt;em&gt;dialog specification language&lt;/em&gt; (DSL), which is based on the programming concepts of functional application, partial function application, currying, and partial evaluation, by augmenting it with additional abstractions to support concise specification of task-based, mixed-initiative dialogs that resemble concurrently executing coroutines. We also formalize the semantics of DSL—the process of simplifying and staging such dialogs specified in the language. We demonstrate that dialog specifications are compressed by to a higher degree when written in DSL using the new abstractions. We also operationalize the formal semantics of DSL in a Haskell functional programming implementation. The Haskell implementation of the simplification/staging rules provides a proof of concept that the formal semantics are sufficient to implement a dialog system specified with the language. We evaluate DSL from practical (i.e., case study), conceptual (i.e., comparisons to similar systems such as VoiceXML and State Chart XML), and theoretical perspectives. 
The practical applicability of the new language abstractions introduced in this work is demonstrated in a case study by using them to model portions of an online food ordering system that can be concurrently staged. Our results indicate that DSL enables concise representation of dialogs composed of multiple concurrent sub-dialogs and improves the compression of dialog expressions reported in prior research. We anticipate that the extension of our language and the formalization of the semantics can facilitate concise specification and smooth implementation of task-based, mixed-initiative, human-computer dialog systems across various domains such as ATMs and interactive, voice-response systems.&lt;/p&gt;
</summary>
      
    </entry>
  
</feed>
