So, in the new version of my app, I use iced's custom shader widget to render images. I've been using a rendering pipeline and shader that directly calculate the image vertices, since I'd like to keep the aspect ratio of images when the user resizes them. I had functions updating the vertices and the screen rect buffer like this:
// Requires `use wgpu::util::DeviceExt;` in scope for `create_buffer_init`.
pub fn update_vertices(&mut self, device: &wgpu::Device, bounds_relative: (f32, f32, f32, f32)) {
    let (x, y, width, height) = bounds_relative;

    // Map the relative bounds (origin at top-left, y down) into NDC (origin at center, y up).
    let left = 2.0 * x - 1.0;
    let right = 2.0 * (x + width) - 1.0;
    let top = 1.0 - 2.0 * y;
    let bottom = 1.0 - 2.0 * (y + height);

    // Four vertices, each stored as [x, y, u, v].
    // (`update_screen_uniforms` below reads these NDC coordinates back via `self.vertices`.)
    let vertices: [f32; 16] = [
        left, bottom, 0.0, 1.0, // Bottom-left
        right, bottom, 1.0, 1.0, // Bottom-right
        right, top, 1.0, 0.0, // Top-right
        left, top, 0.0, 0.0, // Top-left
    ];

    self.vertex_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
        label: Some("Quad Vertex Buffer"),
        contents: bytemuck::cast_slice(&vertices),
        usage: wgpu::BufferUsages::VERTEX | wgpu::BufferUsages::COPY_DST,
    });
}
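Here `bounds_relative` is the widget's bounds expressed as fractions of the viewport size, with the origin at the top-left and y pointing down. Conceptually it comes from something like the helper below (the names and the helper itself are only illustrative, not the exact code):

    // Illustrative only: normalize a pixel-space rectangle by the viewport size.
    fn to_relative_bounds(
        bounds: (f32, f32, f32, f32), // widget bounds in pixels: (x, y, width, height)
        viewport: (f32, f32),         // render target size in pixels: (width, height)
    ) -> (f32, f32, f32, f32) {
        (
            bounds.0 / viewport.0,
            bounds.1 / viewport.1,
            bounds.2 / viewport.0,
            bounds.3 / viewport.1,
        )
    }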
pub fn update_screen_uniforms(
    &self,
    queue: &wgpu::Queue,
    image_dimensions: (u32, u32),
    shader_size: (u32, u32),
    bounds_relative: (f32, f32, f32, f32),
) {
    let shader_width = shader_size.0 as f32;
    let shader_height = shader_size.1 as f32;
    let image_width = image_dimensions.0 as f32;
    let image_height = image_dimensions.1 as f32;

    // Vertices are stored as [x, y, u, v] per vertex, so indices 2 and 3 here are
    // really the first vertex's UVs; only `bottom` is actually used below.
    let vertices = self.vertices;
    let (_left, bottom, _right, _top) = (vertices[0], vertices[1], vertices[2], vertices[3]);

    // Compute aspect ratios
    let image_aspect = image_width / image_height;
    let shader_aspect = shader_width / shader_height;

    // Calculate scale factors - the key is to use the SMALLER scale to maintain the aspect ratio
    let (scale_x, scale_y, _fit_mode) = if image_aspect > shader_aspect {
        // Image is wider than the container - fit to width
        let scale = shader_width / image_width;
        (scale, scale, "FIT_WIDTH")
    } else {
        // Image is taller than the container - fit to height
        let scale = shader_height / image_height;
        (scale, scale, "FIT_HEIGHT")
    };

    // Apply the scaling to get the final dimensions in pixels
    let scaled_width = image_width * scale_x;
    let scaled_height = image_height * scale_y;

    // Express those dimensions relative to the container size
    let final_scale_x = scaled_width / shader_width;
    let final_scale_y = scaled_height / shader_height;

    // Vertical gap that needs to be distributed (kept for reference, unused below)
    let _gap_y = shader_height - scaled_height;

    // Calculate the offset to center the scaled image vertically.
    // Fine-tune the vertical offset with a correction factor to match the Image widget;
    // the `bottom + 1.0` term accounts for asymmetric NDC space.
    let offset_correction = 0.001; // Fine-tuning parameter (may need adjustment)
    let offset_y_ndc = (bottom + 1.0) * (1.0 - final_scale_y) / 2.0 + offset_correction;

    let screen_rect_data = [
        final_scale_x, // Scale X
        final_scale_y, // Scale Y
        0.0,           // Offset X (centered horizontally)
        offset_y_ndc,  // Offset Y to center vertically
    ];

    // Update the screen rect buffer
    queue.write_buffer(
        &self.screen_rect_buffer,
        0,
        bytemuck::cast_slice(&screen_rect_data),
    );
}
However, I noticed that the rendered image would "jiggle" slightly when resizing the window. At first, I assumed the layout math was off. But it turned out to be a deeper issue with how the layout and rendering were coupled.
Calculating the screen rect buffer at the shader level can be fragile. For example, I was using NDC-space vertex coordinates to calculate uniforms like this:
let vertices = self.vertices;
let (_left, bottom, _right, _top) = (vertices[0], vertices[1], vertices[2], vertices[3]);
...
let offset_y_ndc = (bottom + 1.0) * (1.0 - final_scale_y) / 2.0 + offset_correction;
I did need this to make the screen uniforms work correctly, but notice that NDC coordinates cover the whole viewport in the range [-1.0, 1.0]. Any error in these values gets multiplied by half the viewport size on screen, so even a small offset like the 0.001 correction above is already on the order of a pixel on a high-resolution display, and tiny frame-to-frame variations in the computed scale and offset show up as visible jiggle while resizing.
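To put numbers on it, here's a quick sanity check (the viewport heights below are just examples):

    // How many pixels does an error in an NDC offset correspond to?
    // The full viewport spans 2.0 NDC units, hence the division by 2.
    fn ndc_offset_to_pixels(ndc_error: f32, viewport_px: f32) -> f32 {
        ndc_error * viewport_px / 2.0
    }

    fn main() {
        // The 0.001 fudge factor alone is about a pixel on a 2160-px-tall window...
        println!("{}", ndc_offset_to_pixels(0.001, 2160.0)); // ≈ 1.08
        // ...and slightly larger per-frame variations quickly become very noticeable.
        println!("{}", ndc_offset_to_pixels(0.003, 2160.0)); // ≈ 3.24
    }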
Therefore, I decided to decouple the layout calculations from the rendering calculations, the way iced does in its built-in widgets. I now pre-calculate the layout bounds in layout() and simply render the shader across the full area of those bounds. This way, the shader side doesn't have to worry about layout math at all, and the layout calculations run only once, in layout(). It became much smoother, as you can see in the video below!
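For the curious, the new flow boils down to something like the sketch below. `fit_preserving_aspect` is a simplified stand-in for what layout() now does: it aspect-fits the image into the available space once, in pixel space, and the shader then always draws the same fixed quad into whatever bounds it is handed. The names and types here are illustrative, not the exact code from the app:

    // Aspect-fit an image into a container, both in pixels.
    // Returns (x, y, width, height) of the fitted rectangle inside the container.
    fn fit_preserving_aspect(
        image: (f32, f32),
        container: (f32, f32, f32, f32), // (x, y, width, height)
    ) -> (f32, f32, f32, f32) {
        let (img_w, img_h) = image;
        let (cx, cy, cw, ch) = container;
        let scale = (cw / img_w).min(ch / img_h); // "contain" scaling
        let (w, h) = (img_w * scale, img_h * scale);
        // Center the fitted rectangle inside the container.
        (cx + (cw - w) / 2.0, cy + (ch - h) / 2.0, w, h)
    }

    // With the layout decided in pixels, the shader side no longer needs any
    // per-image math: it always draws the same quad covering its bounds.
    const FULL_QUAD: [f32; 16] = [
        // x,    y,   u,   v
        -1.0, -1.0, 0.0, 1.0, // Bottom-left
         1.0, -1.0, 1.0, 1.0, // Bottom-right
         1.0,  1.0, 1.0, 0.0, // Top-right
        -1.0,  1.0, 0.0, 0.0, // Top-left
    ];

    fn main() {
        // e.g. a 4000x3000 image inside a 1280x720 widget area at the window origin:
        let fitted = fit_preserving_aspect((4000.0, 3000.0), (0.0, 0.0, 1280.0, 720.0));
        println!("fitted bounds: {:?}", fitted); // (160.0, 0.0, 960.0, 720.0)
        println!("first vertex: {:?}", &FULL_QUAD[..4]); // [-1.0, -1.0, 0.0, 1.0]
    }

Since the fit happens once per layout pass in pixel space, the per-frame GPU path stays static during a resize, which is what removed the jiggle.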