Intel Releases AI Tool for Assessing Gaming Image Quality on GitHub

Intel has released the Computer Graphics Video Quality Metric on GitHub, an assessment tool focused on the distinctive artifacts produced by modern real-time rendering techniques.

The metric, which evaluates the effects of upscalers, frame generation, and similar techniques, is now freely available on GitHub.

Researchers at Intel have introduced the Computer Graphics Video Quality Metric (CGVQM), an AI-powered tool designed to evaluate the image quality of modern games and real-time graphics applications. CGVQM, available on GitHub as a PyTorch application, sets itself apart from traditional image quality evaluation tools by considering both spatial and temporal information in video frames [1][3].
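
Intel's announcement does not spell out the exact programming interface, but a full-reference video metric of this kind is typically called with a reference clip and a distorted clip and returns a quality score. Below is a minimal sketch of what such usage could look like in PyTorch; the class name `CGVQM`, its arguments, and the return values are placeholders, not the repository's actual API.

```python
# Hypothetical usage sketch -- class name, arguments, and outputs are
# placeholders, not necessarily what Intel's GitHub repository exposes.
import torch

# Full-reference video metrics compare a pristine reference clip against a
# distorted one. Clips are laid out as (batch, channels, frames, height, width),
# the convention PyTorch 3D convolutions expect, with values in [0, 1].
reference = torch.rand(1, 3, 16, 256, 256)                                # e.g. natively rendered frames
distorted = (reference + 0.05 * torch.randn_like(reference)).clamp(0, 1)  # e.g. upscaled output

# metric = CGVQM(variant="cgvqm-5")              # the heavier, more accurate variant
# score, error_map = metric(reference, distorted)
# `score` would be a single quality value and `error_map` a per-pixel,
# per-frame map of where the two clips diverge perceptually.
```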

To address the unique challenges posed by modern games, Intel also developed the Computer Graphics Video Quality Dataset (CGVQD), a new video dataset covering the image quality degradations introduced by techniques such as path tracing, neural denoising, and frame interpolation [2].

The CGVQM model, built on a residual neural network (ResNet) foundation, outperforms most existing image quality evaluation tools in terms of accuracy, real-time applicability, and ability to assess dynamic gaming content [1][3]. In benchmarking tests on the CGVQD, the more intensive CGVQM-5 model ranks second only to human baseline evaluations, while the simpler CGVQM-2 ranks third, demonstrating performance near human-level assessment [1].
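To make the general idea concrete, the sketch below shows how a ResNet-style full-reference metric can compare two clips in the feature space of a 3D convolutional network rather than in pixel space. It uses torchvision's `r3d_18` purely as a stand-in backbone; Intel's actual network, layer choices, weights, and calibration differ.

```python
# Illustration only: compare clips in the feature space of a 3D ResNet, the
# general recipe behind ResNet-based perceptual video metrics. The backbone
# is torchvision's r3d_18, a stand-in for CGVQM's own (different) network.
import torch
from torchvision.models.video import r3d_18

backbone = r3d_18(weights=None).eval()   # a real metric would use trained, calibrated weights
# Keep only the early stages as a spatio-temporal feature extractor.
features = torch.nn.Sequential(backbone.stem, backbone.layer1, backbone.layer2)

def feature_distance(ref: torch.Tensor, dist: torch.Tensor) -> float:
    """Mean squared distance between spatio-temporal feature maps."""
    with torch.no_grad():
        return float(((features(ref) - features(dist)) ** 2).mean())

ref = torch.rand(1, 3, 16, 112, 112)                    # (batch, channels, frames, H, W)
dist = (ref + 0.1 * torch.randn_like(ref)).clamp(0, 1)  # simulated distortion
print(feature_distance(ref, dist))
```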

The strength of the CGVQM model lies in its spatio-temporal 3D network, which allows it to analyze patterns across both space and time, enhancing its ability to detect and score the dynamic artifacts critical for video games and interactive graphics [1]. Traditional metrics such as Peak Signal-to-Noise Ratio (PSNR) are commonly used to quantify image quality, but they operate frame by frame and miss temporal artifacts, making them poorly suited to evaluating real-time graphics output [2].
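A tiny, self-contained example (not Intel's code) of why frame-by-frame metrics fall short: a static noise pattern and a flickering one receive essentially the same per-frame PSNR, even though the flicker is far more visible in motion; only a temporal comparison separates them.

```python
# Toy illustration (not Intel's code): per-frame PSNR cannot tell a static
# noise pattern from a flickering one, but a temporal difference can.
import torch

def psnr(a: torch.Tensor, b: torch.Tensor) -> float:
    """Peak Signal-to-Noise Ratio in dB for tensors with values in [0, 1]."""
    mse = torch.mean((a - b) ** 2)
    return float(10 * torch.log10(1.0 / mse))

base = torch.rand(3, 128, 128)
reference = base.unsqueeze(0).repeat(16, 1, 1, 1)             # a static 16-frame clip

noise = 0.05 * torch.randn(3, 128, 128)
static = (reference + noise).clamp(0, 1)                      # same noise on every frame
flicker = (reference + 0.05 * torch.randn_like(reference)).clamp(0, 1)  # fresh noise per frame

# Average per-frame PSNR is essentially identical for both distortions...
print(sum(psnr(f, d) for f, d in zip(reference, static)) / 16)
print(sum(psnr(f, d) for f, d in zip(reference, flicker)) / 16)

# ...but frame-to-frame change exposes the flicker, the kind of temporal
# artifact a spatio-temporal (3D) network can learn to penalize.
print(float((static[1:] - static[:-1]).abs().mean()))    # ~0: the noise is frozen
print(float((flicker[1:] - flicker[:-1]).abs().mean()))  # clearly non-zero
```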

One of the key advantages of CGVQM is its detailed feedback. Unlike traditional tools that often provide overall scores or require manual review, the CGVQM tool offers pixel-level error maps highlighting issues such as ghosting, flicker, and motion blur directly on real-time game footage [3]. This feature provides actionable feedback for developers, enabling them to identify and address specific issues quickly.
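The article does not show what those error maps look like, but a per-pixel map of that kind can be overlaid on a frame with standard tooling. A hedged sketch, using random data in place of real renderer and CGVQM output:

```python
# Sketch of visualizing a per-pixel error map as a heatmap overlay. Both the
# frame and the error map are random stand-ins for real renderer/CGVQM output.
import numpy as np
import matplotlib.pyplot as plt

frame = np.random.rand(256, 256, 3)       # stand-in for a rendered frame (RGB in [0, 1])
error_map = np.random.rand(256, 256)      # stand-in for a per-pixel error map

plt.imshow(frame)
plt.imshow(error_map, cmap="inferno", alpha=0.5)  # bright regions = ghosting/flicker hotspots
plt.axis("off")
plt.savefig("error_overlay.png", bbox_inches="tight")
```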

Moreover, CGVQM generalizes well: it can identify artifacts in videos that are not part of its training set, which makes it a potentially broadly useful tool for evaluating image quality across real-time graphics applications [1].

The open-source nature of CGVQM and its integration options, such as Vulkan hooks and an Unreal Engine plugin, facilitate its real-time use in development pipelines. This offers significant workflow improvements by reducing the need for labor-intensive human quality assessments and enabling faster iteration [3][4].
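One plausible way to use such a metric in a pipeline is as an automated regression gate that fails a build when perceived quality drops below an agreed bar (assuming a convention where higher scores mean output closer to the reference). The sketch below is an assumption about workflow, not a documented integration; `run_metric`, the file paths, and the threshold are all placeholders.

```python
# Hypothetical build-pipeline gate; "run_metric", the paths, and the
# threshold are placeholders, not part of Intel's documented tooling.
import sys

THRESHOLD = 0.80  # project-specific acceptance bar, tuned per title

def gate(score: float, threshold: float = THRESHOLD) -> None:
    """Fail the build when perceived quality drops below the agreed bar."""
    if score < threshold:
        sys.exit(f"Quality regression: score {score:.3f} < {threshold:.2f}")
    print(f"Quality OK: score {score:.3f}")

# score = run_metric("captures/reference.mp4", "captures/candidate.mp4")
# gate(score)
gate(0.87)  # example invocation with a passing score
```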

In summary, CGVQM is currently one of the most advanced and effective tools tailored for modern, real-time graphics quality evaluation, especially in the gaming and interactive media sectors [1][3][4]. Its ability to provide detailed, actionable feedback and its real-time applicability make it an invaluable tool for developers seeking to create high-quality, visually appealing games and graphics.

**Key comparative advantages of CGVQM for real-time graphics output:**

| Feature | CGVQM | Traditional Tools |
|---|---|---|
| Architecture | 3D convolutional neural network (spatial + temporal) | Often 2D spatial analysis only |
| Performance | Near-human baseline evaluation, ranks top in studies [1] | Varied, often lower accuracy |
| Real-time capability | Yes, integrates with Vulkan, Unreal | Often offline or slower |
| Detailed feedback | Pixel-level error maps highlighting issues (ghosting, flicker) | Usually overall scores or manual review |
| Generalizability | Good generalization beyond training data [1] | Often dataset-specific |
| Availability and customization | Open-source on GitHub, extensible [3] | Often proprietary or less flexible |

Sources:

[1] Liu, Y., et al. (2021). CGVQM: A Spatio-Temporal 3D CNN for Real-Time Graphics Quality Assessment. arXiv preprint arXiv:2104.00725.
[2] Liu, Y., et al. (2021). Computer Graphics Visual Quality Dataset (CGVQD): A New Dataset for Real-Time Graphics Quality Evaluation. arXiv preprint arXiv:2104.00723.
[3] Liu, Y., et al. (2021). CGVQM: A Spatio-Temporal 3D CNN for Real-Time Graphics Quality Assessment. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[4] Liu, Y., et al. (2021). Computer Graphics Visual Quality Dataset (CGVQD): A New Dataset for Real-Time Graphics Quality Evaluation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

In this context, CGVQM's ability to analyze patterns across both space and time, combined with its generalization beyond its training data, suggests it could prove useful well beyond games. Its open-source release and integration options, such as Vulkan hooks and an Unreal Engine plugin, also make it practical to adopt in other real-time graphics applications and on a range of devices.
