In this article we describe how to build a gRPC client in a desktop application. The sample app demonstrates
- simple RPC calls,
- server-side streaming,
- client-side streaming,
- bidirectional streaming,
- client-side load balancing,
- client-side logging, and
- health monitoring.
The app is built with WinUI 3 and the Windows App SDK. Unlike UWP, these two libraries run on .NET 6.0. The app has access to the full Windows API and can integrate any .NET NuGet package. Therefore we believe that the future is bright for XAML and WinUI 3 in building rich integrated tools and utilities – including clients for gRPC servers.
Our sample Visual Studio solution contains an ASP.NET Core gRPC host (requiring Visual Studio 2022 with the ASP.NET and web development workload installed) and a WinUI 3 desktop client. In this article we’ll focus only on the client. Here’s a great intro to the architecture of gRPC. If you’re more into source code, check this code sample. If you have experience with WCF and want to learn about gRPC, this is an awesome PDF for you.
The service we implemented hosts simple calls, server-side streaming, client-side streaming, and bidirectional streaming methods. It represents Star Trek’s transporter room (“Beam me up, Scotty!”) and comes with methods to move lifeforms or groups of lifeforms from and to different places. For a deeper introduction to the use case, check our UWP blog post from which we boldly reused the gRPC server side. The client app represents a console panel for the server app.
Here’s what the UI of our sample client looks like. It comes with
- a console to display messages,
- an on/off button,
- a button to switch the beam direction,
- a button to switch between single target and group target,
- a button to activate the beam (energize), and
- a panic button to swap two parties – just an excuse to have a bidirectional streaming call.

gRPC makes the interaction between client and server look like local procedure calls. That’s not a surprise: it’s basically the definition of RPC. The API looks like methods that accept and return data structures. The contract is expressed in the ProtoBuf format.
Here’s the definition of the data in our sample app – a location and a lifeform:
// It's life, Jim, but not as we know it.
message LifeForm {
  string species = 1;
  string name = 2;
  string rank = 3;
}
// A place in space.
message Location {
  string description = 1;
}
And these are the transporter room’s methods:
// Transporter Room API
service Transporter {
  // Beam up a single life form from a location.
  rpc BeamUp(Location) returns (LifeForm) {}
  // Beam up a party of life forms from a location.
  rpc BeamUpParty(Location) returns (stream LifeForm) {}
  // Beam down a single life form, and return the location.
  rpc BeamDown(LifeForm) returns (Location) {}
  // Beam down a party of life forms, and return the location.
  rpc BeamDownParty(stream LifeForm) returns (Location) {}
  // Replace a beamed down party of life forms by another.
  rpc ReplaceParty(stream LifeForm) returns (stream LifeForm) {}
}
All the required C# classes can be generated from these protobuf files in the server project, including the ones for the client app. We generated them in the default location. You can override all of this to, for example, generate the code into a separate project:

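As an illustrative sketch: when you use the Grpc.Tools package, the code generation can be steered from the project file. The file path and the GrpcServices value below are assumptions for illustration, not taken from the sample:

```xml
<!-- Hypothetical .csproj fragment: compiles the shared .proto file and
     generates only the client-side C# classes for this project. -->
<ItemGroup>
  <Protobuf Include="Protos\transporter.proto" GrpcServices="Client" />
</ItemGroup>
```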
We made a copy of these generated classes in the WinUI 3 client:

Here’s an overview of the classes – a LifeForm, a Location, and the TransporterClient:

Just add the Google.Protobuf and Grpc.Core NuGet packages and you’re ready to go. As you see in the following screenshot, more packages will be added later on:

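Adding these packages from the command line could look like this – a sketch using the dotnet CLI, assuming you run it from the client project folder:

```shell
# Add the message serialization and gRPC runtime packages.
dotnet add package Google.Protobuf
dotnet add package Grpc.Core
```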
Setting up the connection
The TransporterClient is a lightweight ClientBase<T> proxy to the gRPC service. It sends and receives messages and data through a Channel. Our client app keeps these in some private fields:
private ChannelBase _channel;
private TransporterClient _client;
Creating a Channel is relatively slow and expensive, but you can keep it open as long as you want. A channel can be shared between different clients – we’ll do that later on. Here’s how to open a channel:
private void OpenChannel()
{
    _channel = new Channel("localhost:5175", ChannelCredentials.Insecure);
    // Optional: deadline.
    // Uncomment the delay in Server Program.cs to test this.
    // await _channel.ConnectAsync(deadline: DateTime.UtcNow.AddSeconds(2));
    _client = new TransporterClient(_channel);
}
And this is how you close one:
private async void CloseChannel()
{
    await _channel.ShutdownAsync();
}
We were using the Grpc.Core.Channel class here. We’ll introduce you to a more powerful alternative later on.
Sending messages
Here’s an example of a simple call – BeamUpOne() – that takes a Location and returns a LifeForm:
private void BeamUpOne()
{
    var location = new Location
    {
        Description = Data.Locations.WhereEver()
    };
    var lifeForm = _client.BeamUp(location);
    WriteLog($"Beamed up {lifeForm.Rank} {lifeForm.Name} ({lifeForm.Species}) from {location.Description}.");
}
As you see, there’s no rocket science involved here. All methods have an asynchronous variant that supports a deadline scenario. Here’s what our BeamDownOne() method looks like:
private async Task BeamDownOne()
{
    var lifeForm = Data.LifeForms.WhoEver();
    try
    {
        // Uncomment the delay in the Service method to test the deadline.
        var location = await _client.BeamDownAsync(lifeForm, deadline: DateTime.UtcNow.AddSeconds(5));
        WriteLog($"Beamed down {lifeForm.Rank} {lifeForm.Name} ({lifeForm.Species}) to {location.Description}.");
    }
    catch (RpcException ex) when (ex.StatusCode == StatusCode.DeadlineExceeded)
    {
        WriteLog("!!! Beam down timeout.");
    }
}
Our sample app goes into server-side streaming mode when the target is set to ‘Party’, and the beam direction to ‘Up’: a group of lifeforms will be sent to the transporter room. The call returns a ResponseStream through which our client can iterate with MoveNext() and get the individual lifeforms via Current:
private async Task BeamUpParty()
{
    var location = new Location
    {
        Description = Data.Locations.WhereEver()
    };
    using (var lifeForms = _client.BeamUpParty(location))
    {
        while (await lifeForms.ResponseStream.MoveNext())
        {
            var lifeForm = lifeForms.ResponseStream.Current;
            WriteLog($"- Beamed up {lifeForm.Rank} {lifeForm.Name} ({lifeForm.Species}).");
        }
    }
}
We enter client-side streaming mode when a ‘Party’ is beamed ‘Down’ to a location. The client initiates the call, writes all lifeforms to a RequestStream, and calls CompleteAsync() when it’s done:
private async void BeamDownParty()
{
    var rnd = _rnd.Next(2, 5);
    var lifeForms = new List<LifeForm>();
    for (int i = 0; i < rnd; i++)
    {
        var lifeForm = Data.LifeForms.WhoEver();
        lifeForms.Add(lifeForm);
    }
    using (var call = _client.BeamDownParty())
    {
        foreach (var lifeForm in lifeForms)
        {
            await call.RequestStream.WriteAsync(lifeForm);
            WriteLog($"- Beamed down {lifeForm.Rank} {lifeForm.Name} ({lifeForm.Species}).");
        }
        await call.RequestStream.CompleteAsync();
        var location = await call.ResponseAsync;
        WriteLog($"- Party beamed down to {location.Description}.");
    }
}
Unsurprisingly, bidirectional streaming is a combination of server-side and client-side streaming: we have a RequestStream as well as a ResponseStream to walk through. In our ReplaceParty method (the one behind the ‘Red Alert’ button) we used a DispatcherQueue to enable reading and writing simultaneously:
private async void ReplaceParty()
{
    var rnd = _rnd.Next(2, 5);
    var lifeForms = new List<LifeForm>();
    for (int i = 0; i < rnd; i++)
    {
        var lifeForm = Data.LifeForms.WhoEver();
        lifeForms.Add(lifeForm);
    }
    DispatcherQueue dispatcherQueue = DispatcherQueue.GetForCurrentThread();
    using (var call = _client.ReplaceParty())
    {
        var responseReaderTask = Task.Run(async () =>
        {
            while (await call.ResponseStream.MoveNext())
            {
                var beamedDown = call.ResponseStream.Current;
                dispatcherQueue.TryEnqueue(() =>
                {
                    WriteLog($"- Beamed down {beamedDown.Rank} {beamedDown.Name} ({beamedDown.Species}).");
                });
            }
        });
        foreach (var request in lifeForms)
        {
            await call.RequestStream.WriteAsync(request);
            WriteLog($"- Beamed up {request.Rank} {request.Name} ({request.Species}).");
        }
        await call.RequestStream.CompleteAsync();
        await responseReaderTask;
    }
}
Here’s a screenshot of the action. Observe that the up- and downloads happen simultaneously:

Client-side load balancing
The gRPC service in our sample app listens to two ports, as configured in its launchSettings.json file:
"applicationUrl": "http://localhost:5175;http://localhost:7175"
To balance the load over these two addresses on the client side, we switched to a more powerful type of channel: Grpc.Net.Client.GrpcChannel. Our client creates a static resolver for the two addresses, provides it to the channel via a ServiceCollection, and configures the channel for round-robin load balancing:
private void OpenLoadBalancingChannel()
{
    var factory = new StaticResolverFactory(addr => new[]
    {
        new BalancerAddress("localhost", 7175),
        new BalancerAddress("localhost", 5175)
    });
    var services = new ServiceCollection();
    services.AddSingleton<ResolverFactory>(factory);
    _channel = GrpcChannel.ForAddress(
        "static:///transporter-host",
        new GrpcChannelOptions
        {
            Credentials = ChannelCredentials.Insecure,
            ServiceProvider = services.BuildServiceProvider(),
            ServiceConfig = new ServiceConfig { LoadBalancingConfigs = { new RoundRobinConfig() } }
        });
    _client = new TransporterClient(_channel);
}
We’re using classes from the Grpc.Net.Client and Microsoft.Extensions.DependencyInjection NuGet packages here. In the log you see that the client (or actually the channel) now alternates between the two addresses:

Here’s the full documentation for client-side load balancing.
Logging
Here’s how we enabled gRPC client logging. We added the Microsoft.Extensions.Logging and Microsoft.Extensions.Logging.Debug NuGet packages, and created a LoggerFactory:
var loggerFactory = LoggerFactory.Create(logging =>
{
    logging.AddDebug();
    logging.SetMinimumLevel(LogLevel.Debug);
});
And added it to the channel options:
_channel = GrpcChannel.ForAddress(
    "static:///transporter-host",
    new GrpcChannelOptions
    {
        Credentials = ChannelCredentials.Insecure,
        ServiceProvider = services.BuildServiceProvider(),
        LoggerFactory = loggerFactory,
        ServiceConfig = new ServiceConfig { LoadBalancingConfigs = { new RoundRobinConfig() } }
    });
_client = new TransporterClient(_channel);
That’s OK for debug logging. For production logging we wanted to add a separate console window to the project. Here’s the LoggerFactory for this – it requires the Microsoft.Extensions.Logging.Console NuGet package:
var loggerFactory = LoggerFactory.Create(logging =>
{
    logging.AddConsole();
});
This will log to the console window, but our client app does not have one yet. There are two ways to add a console window to a desktop application:
- changing the Output type of the project from Windows Application to Console Application, or
- explicitly calling AllocConsole().
The former looks and feels a bit like a hack. We chose the latter option, since it clearly expresses the intention. AllocConsole() is an unmanaged (non-.NET) Windows function, so you have to import it before you can call it. Here’s the import and the call:
[System.Runtime.InteropServices.DllImport("kernel32.dll")]
private static extern bool AllocConsole();

public MainWindow()
{
    InitializeComponent();
    AllocConsole();
}
Here we are now, with separate consoles for the server and the client side:

Health Monitoring
If you want to do heartbeat checks and health monitoring in your gRPC client, all you need is … yet another NuGet package: Grpc.HealthCheck. For the sake of completeness, we first added a canonical health check to our server:
builder.Services.AddGrpcHealthChecks()
    .AddCheck("Proforma", () => HealthCheckResult.Healthy());
// ...
app.MapGrpcHealthChecksService();
In the client, we added a timer to monitor the server state at a regular interval. We created a new HealthClient and shared the channel of our transporter client. As we already mentioned: multiple clients can share the same channel. Here’s the code:
private void HeartBeatTimer_Tick(object sender, object e)
{
    var client = new Health.HealthClient(_channel);
    try
    {
        var response = client.Check(new HealthCheckRequest());
        WriteLog($"*** Transporter service status: {response.Status}.");
    }
    catch (Exception ex)
    {
        WriteLog($"*** Transporter service error: {ex.Message}.");
    }
}
Here’s a view of our sample solution in action, with regular calls, health check calls, a client log, and a server log:

Here’s the full documentation on gRPC health checks.
In this article we walked through a gRPC client in a Windows App SDK and WinUI 3 desktop app. We demonstrated simple and streaming calls, client-side load balancing, client-side logging, and health monitoring. The sample app lives here on GitHub.
Enjoy!