Prosecution Insights
Last updated: April 19, 2026
Application No. 18/598,894

AUTOMATED VERIFICATION OF STATIC AND DYNAMIC GRAPHIC OBJECTS

Non-Final OA (§102, §103)
Filed: Mar 07, 2024
Examiner: GODDARD, TAMMY
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Honeywell International Inc.
OA Round: 1 (Non-Final)
Grant Probability: 30% (At Risk)
OA Rounds: 1-2
To Grant: 5y 4m
With Interview: 49% (+19.5% lift)

Examiner Intelligence

Career Allow Rate: 30% (grants only 30% of cases; 41 granted / 138 resolved; -32.3% vs TC avg)
Interview Lift: +19.5% among resolved cases with an interview (a strong lift)
Avg Prosecution: 5y 4m (typical timeline)
Career History: 148 total applications across all art units, 10 currently pending

Statute-Specific Performance

§101: 3.3% (-36.7% vs TC avg)
§103: 59.4% (+19.4% vs TC avg)
§102: 19.9% (-20.1% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 138 resolved cases

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 10 is objected to because of the following informalities: In the fifth line of the claim, the word "gase" should be the word "gaze". Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-5, 8, 9, 11-14 and 17-20 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Wang (U.S. Patent Application Publication 2019/0286115 A1, already of record, hereafter ‘115).

Regarding claim 1, Wang teaches a method for automatic verification of static and dynamic graphic objects rendered by a graphic engine (‘115; figs. 5, 7 and 10; Abstract; An object-based integrity verification method for detecting hazardously misleading information contained in an output image of a graphics processing unit…verifying the location, shape and color information for each safety-critical graphic object, and tracking the visibility and overlaying property between safety-critical graphic objects…detecting and evaluating failure condition(s); and annunciating (detected) failure condition(s) with proper warning level and appropriate corrective action), the method comprising:

receiving a signal from the graphic engine (‘115; fig. 6, element 460, ¶ 0080, The object-based integrity verification processor 450 received the control and data information 460 from an external host processor, which is not shown in FIG. 6.; fig. 7, element 546, ¶ 0096, The video output generated from the GPU in section 545 is also sent to an object-based verification process section 550 of the present invention through a verification data bus 546 to check the image integrity – control and data signals 460 are received via verification data bus 546) indicating that a triggering event has occurred (‘115; fig. 6, element 460, ¶ 0080, The object-based integrity verification processor 450 received the control and data information 460 from an external host processor, basic control signals from a GPU to render an image – at least vertical sync to check at least the per frame associated parameters described in ¶ 0051, frame count, etc.; ¶ 0094, CPU function block 515 comprises a sensor interface and verification section 520 for receiving and validating the sensor input data. If a sensor failure is detected, the sensor interface and verification section 520 will automatically switch to the working sensor source and annunciate this corrective action….; ¶ 0109; As shown in FIG. 10, graphic commands received from ports 810 and 812 are verified by a command and input verification function block 804 to detect any missing, inconsistent or invalid graphic command. Function block 804 is also shown as step 732 in FIG. 9. The command and input verification function block 804 also verifies the keyboard input 826 received from keyboard 802 to detect any inconsistent or invalid keyboard input. This function is also shown as step 742 of FIG. 9. Valid keyboard input event is forwarded to the responsible UA for processing); and

for each triggering event: copying data that includes at least one graphic object from a frame buffer to a test buffer (‘115; ¶ 0086, The output RGB image is rendered by an external COTS GPU (not shown in FIG. 6.) with embedded sequential frame ID number in the vertical sync back porch and a line error detecting code CRC in the horizontal blank period. The output RGB image is sent to a display device to communicate with the pilot. This output RGB image is also looped back to the integrity verification processor system 400. This RGB image 410 is stored in a temporary RGB buffer in the RGB buffer….); calculating a timing value for the at least one graphic object (‘115; ¶ 0050; The third step is to create an object verification database for the current video frame, as shown in step 336. This database comprises a plurality of parameters selected from a group comprising a sequential frame counter to identify the current video frame being displayed on the monitor screen….); overlaying the timing value on the at least one graphic object (‘115; ¶ 0051,… Embed a sequential frame ID number in the GPU's drawing buffer for detecting a potential frame buffer swapping failure. Include this sequential frame ID number in a non-visible area of the corresponding video frame, preferably in the vertical sync back porch area, for detecting the freezing screen fault…); and comparing the at least one graphic object with an expected graphic object (‘115; Abstract, identifying a plurality of safety-critical graphic objects in the output image; assigning each safety-critical graphic object an object ID code; creating an object verification database; rendering a monochrome reference ID code image using object ID code as its color component; verifying the location, shape and color information for each safety-critical graphic object, and tracking the visibility and overlaying property between safety-critical graphic objects using the monochrome reference ID code image; detecting and evaluating failure condition; fig. 5, elements 342S, 342T and 342C; ¶ 0052, The next step is to scan and verify the output RGB image, as shown in step 342. Step 342 includes three optional sub-steps, including 342S, 342T and 342C. Only one verification sub-step is required at a time based on the complicity of the display and the available system resources. Verification sub-step 342T utilizes a single channel reference ID code image to distinguish and identify the objects in the output RGB image, and is suitable for most PFD display screens. Verification sub-step 342C utilizes a four channel RGB+ID reference image and is most suitable for a complex PFD display screen with more symbols, more color contents and video background (such as camera image or synthetic vision background). Verification sub-step 342S can be used to verify a simple display frame with fewer selected objects and fewer overlaid layers. Verification sub-step 342S do not require additional graphic engine support. Thus, the verification sub-step 342S can be used as a backup verification procedure, in case 342T or 342C monitor hardware fails).
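As an editorial aid, the method the examiner reads onto Wang reduces to a small trigger-driven loop. The sketch below is illustrative only: the harness, the names (on_trigger, region, FRAME_PERIOD_MS), the nested-list buffer layout, and the 60 Hz rate are all assumptions, not anything defined by the application or by ‘115.

```python
# Hypothetical sketch of the claim 1 loop; none of these names come
# from the application or from Wang '115.

FRAME_PERIOD_MS = 16.7   # assumed 60 Hz refresh rate
frame_counter = 0        # incremented once per triggering event

def on_trigger(frame_buffer, region, expected_pixels):
    """Run one verification pass when the graphic engine signals a trigger."""
    global frame_counter
    frame_counter += 1

    # Copy the graphic object's pixels from the frame buffer to a test buffer.
    x0, y0, x1, y1 = region
    test_buffer = [row[x0:x1] for row in frame_buffer[y0:y1]]

    # Calculate a timing value (counter multiplied by a rate, per claim 3).
    timing_ms = frame_counter * FRAME_PERIOD_MS

    # "Overlay" the timing value on the captured object; tagging the copy
    # is a simplification of drawing the value onto the object.
    tagged = {"timing_ms": timing_ms, "pixels": test_buffer}

    # Compare the captured object with the expected graphic object.
    return None if test_buffer == expected_pixels else tagged
```

Returning the tagged mismatch rather than raising keeps the loop usable for the offline analysis recited in claims 4 and 12.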
Regarding claim 2, Wang teaches the method of claim 1 and further teaches wherein the triggering event indicates at least one of: preparation of the at least one graphic object by the graphic engine; a refresh of the at least one graphic object by the graphic engine; and interaction of a user with the at least one graphic object (‘115; ¶ 0112, During the run time, EDU processor 806 sends GPU 818 graphic drawing commands to render the display output in the video memory 820 based on the verified command and keyboard input 828. The EDU processor 806 also sends the graphic drawing commands to the object-based integrity verification processor 816 to render the required reference image, such as reference ID code image 425, as shown in FIG. 6. The video output image 822 is sent to the LCD panel 824 for display. The video output 822 is also looped back to the object-based integrity verification processor 816. The object-based integrity verification processor 816 verifies the video output 822 using the verification process depicted in FIG. 5 and the verification database stored in verification memory 814 to detect any invalid video frame or failed safety-critical graphic object. This process is also shown as step 752 of FIG. 9).

Regarding claim 3, Wang teaches the method of claim 1 and further teaches wherein calculating the timing value comprises: incrementing a counter (‘115; ¶ 0050, Embed a sequential frame ID number in the GPU's drawing buffer for detecting a potential frame buffer swapping failure. Include this sequential frame ID number in a non-visible area of the corresponding video frame, preferably in the vertical sync back porch area, for detecting the freezing screen fault); and multiplying the counter by a rate (‘115; ¶ 0047).

Regarding claim 4, Wang teaches the method of claim 1 and further teaches wherein comparing the at least one graphic object with the expected graphic object comprises saving the data in the test buffer for offline analysis (‘115; ¶ 0079, …The reference image buffer 440 and the verification memory 470 can be implemented using memory devices supported by the selected FPGA; ¶ 0080,…. An object verification database 495 is created following the process 336, and is stored in the verification memory 470; ¶ 0069,…The third task of verification sub-step 342C is to scan in the reference RGB+ID code image, verify the error detecting codes, and save the images into memory for later usage; ¶ 0087,….If this HSV value is not belong to any of the above color bins, the counter of the out of tolerance bin will be increased by one. The out of tolerance HSV value, as well as the pixel's X-Y coordinates will be saved in a separate area in the selected histogram page for later evaluation).

Regarding claim 5, Wang teaches the method of claim 1 and further teaches wherein comparing the at least one graphic object with the expected graphic object comprises transmitting the at least one graphic object to another computing device (‘115; fig. 7, elements 515 and 540, the at least one graphic object is transmitted from computing device 515 to computing device 540 where it is validated; ¶ 0096, The video output generated from the GPU in section 545 is also sent to an object-based verification process section 550 of the present invention through a verification data bus 546 to check the image integrity. Video frame parameters and a plurality of safety-critical graphic objects rendered by the GPU in section 545 are verified to detect any image fault, such as image freezing, incorrect safety-critical graphic object's shape, size, color, orientation and readability problems.).

Regarding claim 8, Wang teaches the method of claim 1 and further teaches wherein copying the data comprises reading pixel data for pixels displayed on a display unit (‘115; ¶ 0045, The verification zone covers either the entire or just a portion of the video frame to be verified and contains all selected safety-critical graphic objects. Reference to FIG. 2, the verification zone can be selected to cover only the attitude indicator 120 in the primary display screen 100, as an example. Each selected object may represent a unique function, such as bank pointer 280 or airplane's nose symbol 220 in the attitude indicator 200).

Regarding claim 9, Wang teaches a system (‘115; figs. 5, 7 and 10; ¶ 0004, This invention relates to safety-critical display application, and more particularly, to methods and systems for preventing and detecting Hazardously Misleading Information (HMI) on safety-critical aircraft cockpit displays.) comprising: a display system (‘115 fig. 7, element 560; ¶ 0097, Display and keyboard function block, 560, comprises a flat panel display and user input devices, such as keyboard or touch device.); one or more memory units configured to store one or more buffers for storing information that can be rendered as graphic objects (‘115; fig. 6; ¶ 0079, …The reference image buffer 440 and the verification memory 470 can be implemented using memory devices supported by the selected FPGA; ¶ 0080,…. An object verification database 495 is created following the process 336, and is stored in the verification memory 470); and one or more processors configured to execute computer instructions that implement (‘115; ¶ 0092, As shown in FIG. 7, the exemplary cockpit display system 500 of the present invention comprises a sensor inputs and Air Data Computer (ADC) function block 505, a Central Processor Unit (CPU) function block 515, a GPU function block 540 and a display and keyboard function block 560 – at least two processors): a graphic engine configured to generate one or more graphic objects on the display system (‘115; figs. 5, 7 and 10; Abstract; An object-based integrity verification method for detecting hazardously misleading information contained in an output image of a graphics processing unit…verifying the location, shape and color information for each safety-critical graphic object, and tracking the visibility and overlaying property between safety-critical graphic objects…detecting and evaluating failure condition(s); and annunciating (detected) failure condition(s) with proper warning level and appropriate corrective action); and a graphic automation framework configured to respond to triggering events (‘115; fig. 6, element 460, ¶ 0080, The object-based integrity verification processor 450 received the control and data information 460 from an external host processor, basic control signals from a GPU to render an image – at least vertical sync to check at least the per frame associated parameters described in ¶ 0051, frame count, etc.; ¶ 0094, CPU function block 515 comprises a sensor interface and verification section 520 for receiving and validating the sensor input data. If a sensor failure is detected, the sensor interface and verification section 520 will automatically switch to the working sensor source and annunciate this corrective action….; ¶ 0109; As shown in FIG. 10, graphic commands received from ports 810 and 812 are verified by a command and input verification function block 804 to detect any missing, inconsistent or invalid graphic command. Function block 804 is also shown as step 732 in FIG. 9. The command and input verification function block 804 also verifies the keyboard input 826 received from keyboard 802 to detect any inconsistent or invalid keyboard input. This function is also shown as step 742 of FIG. 9. Valid keyboard input event is forwarded to the responsible UA for processing) by: copying data describing at least one graphic object in the one or more graphic objects from a frame buffer (‘115; ¶ 0051,… render the video output RGB image, as shown in step 340. This can be done by a COTS GPU based on the Definition File (DF) and the widget library according to the ARINC 661 specification. The rendered image may include anti-aliasing, alpha blending and dithering features. Embed a sequential frame ID number in the GPU's drawing buffer for detecting a potential frame buffer swapping failure) in the one or more buffers (‘115; fig. 6; ¶ 0079, …The reference image buffer 440 and the verification memory 470 can be implemented using memory devices supported by the selected FPGA; ¶ 0080,…. An object verification database 495 is created following the process 336, and is stored in the verification memory 470) into a test buffer in the one or more buffers (‘115; ¶ 0086, The output RGB image is rendered by an external COTS GPU (not shown in FIG. 6.) with embedded sequential frame ID number in the vertical sync back porch and a line error detecting code CRC in the horizontal blank period. The output RGB image is sent to a display device to communicate with the pilot. This output RGB image is also looped back to the integrity verification processor system 400. This RGB image 410 is stored in a temporary RGB buffer in the RGB buffer….); determine a timing value for the at least one graphic object (‘115; ¶ 0050; The third step is to create an object verification database for the current video frame, as shown in step 336. This database comprises a plurality of parameters selected from a group comprising a sequential frame counter to identify the current video frame being displayed on the monitor screen….); associating the timing value with the at least one graphic object (‘115; ¶ 0051,… Embed a sequential frame ID number in the GPU's drawing buffer for detecting a potential frame buffer swapping failure. Include this sequential frame ID number in a non-visible area of the corresponding video frame, preferably in the vertical sync back porch area, for detecting the freezing screen fault…); and compare the at least one graphic object with an expected graphic object (‘115; Abstract, identifying a plurality of safety-critical graphic objects in the output image; assigning each safety-critical graphic object an object ID code; creating an object verification database; rendering a monochrome reference ID code image using object ID code as its color component; verifying the location, shape and color information for each safety-critical graphic object, and tracking the visibility and overlaying property between safety-critical graphic objects using the monochrome reference ID code image; detecting and evaluating failure condition; fig. 5, elements 342S, 342T and 342C; ¶ 0052, The next step is to scan and verify the output RGB image, as shown in step 342. Step 342 includes three optional sub-steps, including 342S, 342T and 342C. Only one verification sub-step is required at a time based on the complicity of the display and the available system resources. Verification sub-step 342T utilizes a single channel reference ID code image to distinguish and identify the objects in the output RGB image, and is suitable for most PFD display screens. Verification sub-step 342C utilizes a four channel RGB+ID reference image and is most suitable for a complex PFD display screen with more symbols, more color contents and video background (such as camera image or synthetic vision background). Verification sub-step 342S can be used to verify a simple display frame with fewer selected objects and fewer overlaid layers. Verification sub-step 342S do not require additional graphic engine support. Thus, the verification sub-step 342S can be used as a backup verification procedure, in case 342T or 342C monitor hardware fails).

Regarding claim 11, Wang teaches the system of claim 9 and further teaches wherein the graphic automation framework is configured to determine the timing value by incrementing a counter (‘115; ¶ 0050, Embed a sequential frame ID number in the GPU's drawing buffer for detecting a potential frame buffer swapping failure. Include this sequential frame ID number in a non-visible area of the corresponding video frame, preferably in the vertical sync back porch area, for detecting the freezing screen fault) and multiplying the counter by a rate (‘115; ¶ 0047).

Regarding claim 12, Wang teaches the system of claim 9 and further teaches wherein the graphic automation framework is configured to compare the at least one graphic object by saving the data in the test buffer for offline analysis (‘115; ¶ 0079, …The reference image buffer 440 and the verification memory 470 can be implemented using memory devices supported by the selected FPGA; ¶ 0080,…. An object verification database 495 is created following the process 336, and is stored in the verification memory 470; ¶ 0069,…The third task of verification sub-step 342C is to scan in the reference RGB+ID code image, verify the error detecting codes, and save the images into memory for later usage; ¶ 0087,….If this HSV value is not belong to any of the above color bins, the counter of the out of tolerance bin will be increased by one. The out of tolerance HSV value, as well as the pixel's X-Y coordinates will be saved in a separate area in the selected histogram page for later evaluation).
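The ¶ 0087 passage quoted for claims 4 and 12 describes a concrete color check: each scanned pixel's HSV value either lands in a predefined color bin or is counted as out of tolerance and logged with its X-Y coordinates for later evaluation. A minimal sketch, with hypothetical bin ranges (‘115 does not publish its bin table in the passages quoted above):

```python
# Assumed HSV bin table; the real bins in '115 are not disclosed here.
COLOR_BINS = {
    "sky_blue": ((200, 220), (60, 100), (60, 100)),
    "ground_brown": ((20, 40), (40, 80), (30, 70)),
}

def classify_pixel(x, y, hsv, counters, out_of_tolerance):
    """Increment the matching color-bin counter, or log an out-of-tolerance pixel."""
    h, s, v = hsv
    for name, ((h0, h1), (s0, s1), (v0, v1)) in COLOR_BINS.items():
        if h0 <= h <= h1 and s0 <= s <= s1 and v0 <= v <= v1:
            counters[name] += 1
            return
    # No bin matched: bump the out-of-tolerance counter and save the
    # value and coordinates for later evaluation, as in ¶ 0087.
    counters["out_of_tolerance"] += 1
    out_of_tolerance.append((x, y, hsv))

# Example use:
counters = {name: 0 for name in COLOR_BINS}
counters["out_of_tolerance"] = 0
flagged = []
classify_pixel(12, 40, (210, 80, 90), counters, flagged)  # falls in sky_blue
classify_pixel(5, 9, (0, 0, 100), counters, flagged)      # logged as out of tolerance
```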
Regarding claim 13, Wang teaches the system of claim 9 and further teaches wherein the graphic engine and the graphic automation framework are configured to share at least one processor in the one or more processors (‘115; fig. 10, element 818; ¶ 0112, During the run time, EDU processor 806 sends GPU 818 graphic drawing commands to render the display output in the video memory 820 based on the verified command and keyboard input 828. The EDU processor 806 also sends the graphic drawing commands to the object-based integrity verification processor 816 to render the required reference image, such as reference ID code image 425, as shown in FIG. 6. The video output image 822 is sent to the LCD panel 824 for display. The video output 822 is also looped back to the object-based integrity verification processor 816. The object-based integrity verification processor 816 verifies the video output 822 using the verification process depicted in FIG. 5 and the verification database stored in verification memory 814 to detect any invalid video frame or failed safety-critical graphic object. This process is also shown as step 752 of FIG. 9) and at least one memory unit in the one or more memory units (‘115; fig. 10, element 808, ¶ 0111, If the cockpit display system is designed according to the ARINC 661 protocol, the EDU processor 806 configures the display screen based on a Graphic User Interface (GUI) screen Definition File (DF) and a graphic library stored in system memory 808. Based on the DF file, the EDU processor 806 generates a verification database, and sends it to the object-based integrity verification processor 816. The verification database is, then, stored in the verification memory 814.).

Regarding claim 14, Wang teaches the system of claim 9 and further teaches wherein the graphic automation framework is configured to transmit the data describing the at least one graphic object to another computing device (‘115; fig. 7, elements 515 and 540, the at least one graphic object is transmitted from computing device 515 to computing device 540 where it is validated; ¶ 0096, The video output generated from the GPU in section 545 is also sent to an object-based verification process section 550 of the present invention through a verification data bus 546 to check the image integrity. Video frame parameters and a plurality of safety-critical graphic objects rendered by the GPU in section 545 are verified to detect any image fault, such as image freezing, incorrect safety-critical graphic object's shape, size, color, orientation and readability problems.).

Regarding claim 17, Wang teaches the system of claim 9 and further teaches wherein the display system consists of resistive or capacitive touch interface configured to receive at least one of touch and gestures from a user (‘115; ¶ 0097, Display and keyboard function block, 560, comprises a flat panel display and user input devices, such as keyboard or touch device; ¶ 0106, EDU also verifies the integrity of the local user inputs from keypad and touch screen if applicable, as indicated in step 740).

Regarding claim 18, Wang teaches a system (‘115; fig. 8; ¶ 0099, FIG. 8 shows a first redundant system embodiment of the present invention with the GPU function block integrated in the display unit. Cockpit display system 600 comprises four substantially identical Electronics Display Units (EDU) 602, 604, 606 and 608. Each EDU has a built-in GPU and an object-based verification system of the present invention as depicted in FIG. 6. EDUs 602 and 604 are used by the pilot; while EDUs 606 and 608 are used by the co-pilot. Under the normal operating condition, EDUs 602 and 608 are used for PFD functions; while EDUs 604 and 606 are used for navigation and MFD functions. Since all of these four EDUs are substantially identical, EDUs 604 and 606 can be used for PFD functions if EDUs 602 and 608 become unavailable) comprising: a server (‘115; fig. 8, elements 650 and 660; ¶ 0099-0100; two independent IMA computers 650 and 660 serving four substantially identical Electronics Display Units (EDU) 602, 604, 606 and 608; redundant cockpit display system embodiment 600) configured to implement a graphic engine configured to generate one or more graphic objects on a display unit (‘115; ¶ 0101, Each of IMA computers 650 and 660 hosts a User Application (UA) software, which generates graphic commands for each EDU based on the selected sensor input source. The UA software is implemented using an ARINC 661 or other suitable protocol); and a graphic automation framework is configured to receive the one or more graphic objects generated by the graphic engine and respond to triggering events (‘115; fig. 6, element 460, ¶ 0080, The object-based integrity verification processor 450 received the control and data information 460 from an external host processor, basic control signals from a GPU to render an image – at least vertical sync to check at least the per frame associated parameters described in ¶ 0051, frame count, etc.; ¶ 0094, CPU function block 515 comprises a sensor interface and verification section 520 for receiving and validating the sensor input data. If a sensor failure is detected, the sensor interface and verification section 520 will automatically switch to the working sensor source and annunciate this corrective action….; ¶ 0109; As shown in FIG. 10, graphic commands received from ports 810 and 812 are verified by a command and input verification function block 804 to detect any missing, inconsistent or invalid graphic command. Function block 804 is also shown as step 732 in FIG. 9. The command and input verification function block 804 also verifies the keyboard input 826 received from keyboard 802 to detect any inconsistent or invalid keyboard input. This function is also shown as step 742 of FIG. 9. Valid keyboard input event is forwarded to the responsible UA for processing) by: copying data describing at least one graphic object in the one or more graphic objects from a frame buffer into a test buffer (‘115; ¶ 0051,… render the video output RGB image, as shown in step 340. This can be done by a COTS GPU based on the Definition File (DF) and the widget library according to the ARINC 661 specification. The rendered image may include anti-aliasing, alpha blending and dithering features. Embed a sequential frame ID number in the GPU's drawing buffer for detecting a potential frame buffer swapping failure) in the one or more buffers (‘115; fig. 6; ¶ 0079, …The reference image buffer 440 and the verification memory 470 can be implemented using memory devices supported by the selected FPGA; ¶ 0080,…. An object verification database 495 is created following the process 336, and is stored in the verification memory 470) into a test buffer in the one or more buffers (‘115; ¶ 0086, The output RGB image is rendered by an external COTS GPU (not shown in FIG. 6.) with embedded sequential frame ID number in the vertical sync back porch and a line error detecting code CRC in the horizontal blank period. The output RGB image is sent to a display device to communicate with the pilot. This output RGB image is also looped back to the integrity verification processor system 400. This RGB image 410 is stored in a temporary RGB buffer in the RGB buffer….); determine a counter value for the at least one graphic object (‘115; ¶ 0050; The third step is to create an object verification database for the current video frame, as shown in step 336. This database comprises a plurality of parameters selected from a group comprising a sequential frame counter to identify the current video frame being displayed on the monitor screen….); associate the counter value on the at least one graphic object (‘115; ¶ 0051,… Embed a sequential frame ID number in the GPU's drawing buffer for detecting a potential frame buffer swapping failure. Include this sequential frame ID number in a non-visible area of the corresponding video frame, preferably in the vertical sync back porch area, for detecting the freezing screen fault…); and compare the at least one graphic object with an expected graphic object (‘115; Abstract, identifying a plurality of safety-critical graphic objects in the output image; assigning each safety-critical graphic object an object ID code; creating an object verification database; rendering a monochrome reference ID code image using object ID code as its color component; verifying the location, shape and color information for each safety-critical graphic object, and tracking the visibility and overlaying property between safety-critical graphic objects using the monochrome reference ID code image; detecting and evaluating failure condition; fig. 5, elements 342S, 342T and 342C; ¶ 0052, The next step is to scan and verify the output RGB image, as shown in step 342. Step 342 includes three optional sub-steps, including 342S, 342T and 342C. Only one verification sub-step is required at a time based on the complicity of the display and the available system resources. Verification sub-step 342T utilizes a single channel reference ID code image to distinguish and identify the objects in the output RGB image, and is suitable for most PFD display screens. Verification sub-step 342C utilizes a four channel RGB+ID reference image and is most suitable for a complex PFD display screen with more symbols, more color contents and video background (such as camera image or synthetic vision background). Verification sub-step 342S can be used to verify a simple display frame with fewer selected objects and fewer overlaid layers. Verification sub-step 342S do not require additional graphic engine support. Thus, the verification sub-step 342S can be used as a backup verification procedure, in case 342T or 342C monitor hardware fails).
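Claims 9 and 18 lean on ‘115's embedded sequential frame ID (in the vertical sync back porch) and per-line error-detecting CRC (in the horizontal blank period) to catch frozen screens, frame-buffer swap failures, and corrupted lines. Below is a sketch of the receiving-side check, assuming CRC-32 as a stand-in and a simple monotonic frame-ID test; ‘115 specifies neither the CRC polynomial nor the field layout in the passages quoted here.

```python
import zlib

last_frame_id = None  # ID extracted from the previous frame's back porch

def check_frame(frame_id, lines_with_crc):
    """Return detected faults for one looped-back video frame.

    lines_with_crc: iterable of (pixels, expected_crc) pairs, where pixels
    is the bytes of one scan line and expected_crc is the embedded code.
    """
    global last_frame_id
    faults = []

    # A frame ID that fails to advance suggests a frozen screen or a
    # frame-buffer swapping failure (wrap-around handling omitted).
    if last_frame_id is not None and frame_id <= last_frame_id:
        faults.append(f"frame ID did not advance: {last_frame_id} -> {frame_id}")
    last_frame_id = frame_id

    # Verify each line's error-detecting code (CRC-32 used as a stand-in).
    for n, (pixels, expected_crc) in enumerate(lines_with_crc):
        if zlib.crc32(pixels) != expected_crc:
            faults.append(f"CRC mismatch on line {n}")
    return faults
```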
Regarding claim 19, Wang teaches the system of claim 18 and further teaches wherein the server communicates with a different computation device that executes the graphic automation framework (‘115; fig. 7, elements 515 and 540, the at least one graphic object is transmitted from computing device 515 to computing device 540 where it is validated; ¶ 0096, The video output generated from the GPU in section 545 is also sent to an object-based verification process section 550 of the present invention through a verification data bus 546 to check the image integrity. Video frame parameters and a plurality of safety-critical graphic objects rendered by the GPU in section 545 are verified to detect any image fault, such as image freezing, incorrect safety-critical graphic object's shape, size, color, orientation and readability problems.), wherein the different computation device and the server share at least one computational resource (‘115; fig. 10, element 818; ¶ 0112, During the run time, EDU processor 806 sends GPU 818 graphic drawing commands to render the display output in the video memory 820 based on the verified command and keyboard input 828. The EDU processor 806 also sends the graphic drawing commands to the object-based integrity verification processor 816 to render the required reference image, such as reference ID code image 425, as shown in FIG. 6. The video output image 822 is sent to the LCD panel 824 for display. The video output 822 is also looped back to the object-based integrity verification processor 816. The object-based integrity verification processor 816 verifies the video output 822 using the verification process depicted in FIG. 5 and the verification database stored in verification memory 814 to detect any invalid video frame or failed safety-critical graphic object. This process is also shown as step 752 of FIG. 9).

Regarding claim 20, Wang teaches the system of claim 18 and further teaches wherein the graphic automation framework is configured to compare the at least one graphic object with the expected graphic object as directed by a test script (‘115; fig. 5; Abstract, identifying a plurality of safety-critical graphic objects in the output image; assigning each safety-critical graphic object an object ID code; creating an object verification database; rendering a monochrome reference ID code image using object ID code as its color component; verifying the location, shape and color information for each safety-critical graphic object, and tracking the visibility and overlaying property between safety-critical graphic objects using the monochrome reference ID code image; detecting and evaluating failure condition; fig. 5, elements 342S, 342T and 342C; ¶ 0052, The next step is to scan and verify the output RGB image, as shown in step 342. Step 342 includes three optional sub-steps, including 342S, 342T and 342C. Only one verification sub-step is required at a time based on the complicity of the display and the available system resources. Verification sub-step 342T utilizes a single channel reference ID code image to distinguish and identify the objects in the output RGB image, and is suitable for most PFD display screens. Verification sub-step 342C utilizes a four channel RGB+ID reference image and is most suitable for a complex PFD display screen with more symbols, more color contents and video background (such as camera image or synthetic vision background). Verification sub-step 342S can be used to verify a simple display frame with fewer selected objects and fewer overlaid layers. Verification sub-step 342S do not require additional graphic engine support. Thus, the verification sub-step 342S can be used as a backup verification procedure, in case 342T or 342C monitor hardware fails).

Claim Rejections - 35 USC § 103

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 6, 7, 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (U.S. Patent Application Publication 2019/0286115 A1, already of record, hereafter ‘115) as applied to claims 1-5, 8, 9, 11-14 and 17-20 above, and in view of Pletter (U.S. Patent Application Publication 2011/0276946 A1, already of record, hereafter ‘946).

Regarding claim 6, Wang teaches the method of claim 1 and further teaches wherein copying the data comprises comparing pixel data of the at least one graphic object against previously acquired pixel data (‘115; ¶ 0070) but does not explicitly teach to determine whether the at least one graphic object is new. Pletter, working in the same field of endeavor, however, teaches how to determine whether the at least one graphic object is new (‘946; fig. 1, elements 122, 124, 126 and 128; ¶ 0067-0068, In some instances, a mismatch between an expected image and a test image may be expected. For example, a developer may have modified the user interface component definition with the intention of effecting a change in the visual presentation of the user interface component. Accordingly, if the test image and the expected image do not match, a determination is made at 124 as to whether the output shown in the test image is expected. The determination at 124 may be made by presenting a message to a user and receiving an indication from the user.) for the benefit of reducing false alarm indications that may be caused by intentional user interaction with the display systems. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have combined the techniques for determining whether the at least one graphic object is new as taught by Pletter with the methods to implement automatic verification of static and dynamic graphic objects rendered by a graphic engine as taught by Wang for the benefit of reducing false alarm indications that may be caused by intentional user interaction with the display systems.

Regarding claim 7, Wang and Pletter teach the method of claim 6 and further teach the method as further comprising swapping the previously acquired pixel data with the pixel data when the at least one graphic object is new (‘115; ¶ 0112, The video output image 822 is sent to the LCD panel 824 for display. The video output 822 is also looped back to the object-based integrity verification processor 816. The object-based integrity verification processor 816 verifies the video output 822 using the verification process depicted in FIG. 5 and the verification database stored in verification memory 814 to detect any invalid video frame or failed safety-critical graphic object).
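Pletter's contribution, as applied here, is the baseline ("gold file") workflow: when a captured image and its stored expected image disagree, refer the mismatch to a user, and if the change was intentional, swap the new pixels in as the baseline, which is also the "swapping" step of claims 7 and 16. A minimal sketch; the confirm_change callback is a hypothetical stand-in for ‘946's user prompt:

```python
def verify_against_baseline(test_image, baselines, key, confirm_change):
    """Compare a captured image to its stored baseline; swap in a new
    baseline when the user confirms the change was intentional."""
    expected = baselines.get(key)
    if expected is not None and test_image == expected:
        return True                      # matches the baseline: no fault
    # Mismatch (or no baseline yet): ask whether the new output is
    # expected, as in '946 paragraphs 0067-0068.
    if confirm_change(key, expected, test_image):
        baselines[key] = test_image      # swap the previously acquired pixel data
        return True                      # intentional change, not a fault
    return False                         # genuine fault
```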
Regarding claim 15, Wang teaches the system of claim 9 and does not teach wherein the graphic automation framework is configured to copy the data describing the at least one graphic object when pixel data associated with the at least one graphic object is different from previously acquired pixel data for previously rendered graphic objects. Pletter, working in the same field of endeavor, however, teaches wherein the graphic automation framework is configured to copy the data describing the at least one graphic object (‘946; fig. 1, element 118; ¶ 0058, At 118, a pre-generated image representing an expected visual presentation of the user interface component is retrieved {copied}. This pre-generated image is also known as an expected image, a baseline image, or a gold file. An example of an expected image is displayed in the user interface shown in FIG. 3B.) when pixel data associated with the at least one graphic object is different from previously acquired pixel data for previously rendered graphic objects (‘946; fig. 1, elements 122, 124, 126 and 128; ¶ 0067-0068, In some instances, a mismatch between an expected image and a test image may be expected. For example, a developer may have modified the user interface component definition with the intention of effecting a change in the visual presentation of the user interface component. Accordingly, if the test image and the expected image do not match, a determination is made at 124 as to whether the output shown in the test image is expected. The determination at 124 may be made by presenting a message to a user and receiving an indication from the user.) for the benefit of reducing false alarm indications that may be caused by intentional user interaction with the display systems. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have combined the techniques for determining the graphic automation framework is configured to copy the data describing the at least one graphic object when pixel data associated with the at least one graphic object is different from previously acquired pixel data for previously rendered graphic objects as taught by Pletter with the methods to implement automatic verification of static and dynamic graphic objects rendered by a graphic engine as taught by Wang for the benefit of reducing false alarm indications that may be caused by intentional user interaction with the display systems.

Regarding claim 16, Wang and Pletter teach the system of claim 15 and further teach wherein the graphic automation framework is further configured to swap the previously acquired pixel data with the pixel data associated with the at least one graphic object (‘946; fig. 1, element 118; ¶ 0058, At 118, a pre-generated image representing an expected visual presentation of the user interface component is retrieved {swapped}. This pre-generated image is also known as an expected image, a baseline image, or a gold file. An example of an expected image is displayed in the user interface shown in FIG. 3B.).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Wang (U.S. Patent Application Publication 2019/0286115 A1, already of record, hereafter ‘115) as applied to claims 1-9 and 11-20 above, and in view of Gruber (U.S. Patent Application Publication 2020/0098165 A1, hereafter ‘165).

Regarding claim 10, Wang teaches the system of claim 9 and further teaches wherein the triggering events indicate at least one of: preparation of the at least one graphic object by the graphic engine (‘115; ¶ 0050, create an object verification database for the current video frame, as shown in step 336); a refresh of the at least one graphic object by the graphic engine (‘115; ¶ 0112, The video output image 822 is sent to the LCD panel 824 for display. The video output 822 is also looped back to the object-based integrity verification processor 816. The object-based integrity verification processor 816 verifies the video output 822 using the verification process depicted in FIG. 5 and the verification database stored in verification memory 814 to detect any invalid video frame or failed safety-critical graphic object); and interaction of a user with the at least one graphic object (‘115; ¶ 0112, During the run time, EDU processor 806 sends GPU 818 graphic drawing commands to render the display output in the video memory 820 based on the verified command and keyboard input 828. The EDU processor 806 also sends the graphic drawing commands to the object-based integrity verification processor 816 to render the required reference image, such as reference ID code image 425, as shown in FIG. 6. The video output image 822 is sent to the LCD panel 824 for display. The video output 822 is also looped back to the object-based integrity verification processor 816. The object-based integrity verification processor 816 verifies the video output 822 using the verification process depicted in FIG. 5 and the verification database stored in verification memory 814 to detect any invalid video frame or failed safety-critical graphic object. This process is also shown as step 752 of FIG. 9), and does not teach wherein the interaction of the user with the at least one graphic object is through at least one of eye gase control, speech, touch, gesture, cursor control device, keyboard, and knobs. Gruber, working in the same field of endeavor, however, teaches wherein the interaction of the user with the at least one graphic object is through at least one of eye gaze control, speech, touch, gesture, cursor control device, keyboard, and knobs (‘165; ¶ 0054-0055; [0054] The device 100 may include or be connected to one or more input devices 113. In some examples, the one or more input devices 113 may include one or more of: a touch screen, a mouse, a peripheral device, an audio input device (e.g., a microphone or any other visual input device), a visual input device (e.g., a camera, an eye tracker, or any other visual input device), any user input device, or any input device configured to receive an input from a user. In some examples, the display 103 may be a touch screen display; and, in such examples, the display 103 constitutes an example input device 113. In the example of FIG. 1A, the one or more input devices 113 is shown as including an eye gaze input device 113-1. The eye gaze input device 113-1 may be configured to determine where a user of device 100 is looking, such as where a user is looking on a display (e.g., the display 103)) for the benefit of providing a range of user interface input/control options.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have combined the techniques for accommodating the interaction of the user with the at least one graphic object through at least one of eye gaze control, speech, touch, gesture, cursor control device, keyboard, and knobs as taught by Gruber with the methods to implement automatic verification of static and dynamic graphic objects rendered by a graphic engine as taught by Wang for the benefit of providing a range of user interface input/control options.

Conclusion

The following prior art, made of record, was not relied upon but is considered pertinent to applicant's disclosure:

US 11,221,932 B2, Methods and Systems for Monitoring the Integrity of a GPU – Methods and systems for monitoring the integrity of a graphics processing unit (GPU) are provided. The method comprises the steps of determining a known-good result associated with an operation of the GPU, and generating a test image comprising a test subject using the operation of the GPU, such that the test subject is associated with the known-good result. The test image is written to video memory, and the known-good result is written to system memory. Subsequently, the test subject from the test image is transferred from video memory to system memory. The test subject in the system memory is compared with the known-good result in system memory. If the test subject does not match the known-good result, then a conclusion is drawn that the integrity of the GPU has been compromised.

US 20200159647 A1, Testing User Interfaces Using Machine Vision – Methods, systems, apparatuses, and computer program products are provided for validating a graphical user interface (GUI). An application comprising the GUI may be executed. A test script may also be executed that is configured to interact with the GUI of the application. Images representing the GUI of the application may be captured at different points in time, such as different interaction points. For each image, a set of tags that identify expected objects may be associated with the image. A model may be applied that classifies one or more graphical objects identified in each image. Based on the associated set of tags and the classification of the graphical objects in the image, each image may be validated, thereby enabling the validation of the GUI of the application.
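The first cited reference reduces to a render-and-readback self-test: draw a test subject whose correct rendering is known in advance, read it back from video memory, and compare it against the known-good result held in system memory. A sketch under those assumptions; render_test_subject is a hypothetical stand-in for the actual GPU draw-and-readback call:

```python
def gpu_integrity_check(render_test_subject, known_good_result):
    """Return True if the GPU reproduces the known-good result, False if
    the GPU's integrity should be treated as compromised."""
    test_subject = render_test_subject()       # rendered to video memory, read back
    return test_subject == known_good_result   # compared in system memory
```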
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD MARTELLO whose telephone number is (571) 270-1883. The examiner can normally be reached on M-F from 9AM to 5PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached at telephone number (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDWARD MARTELLO/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Mar 07, 2024
Application Filed
Jan 23, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573004: GENERATIVE IMAGE FILLING USING A REFERENCE IMAGE
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12548257: Systems and Methods for 3D Facial Modeling
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12530839: RELIGHTABLE NEURAL RADIANCE FIELD MODEL
Granted Jan 20, 2026 (2y 5m to grant)
Patent 12462480: IMAGE PROCESSING METHOD
Granted Nov 04, 2025 (2y 5m to grant)
Patent 10140972: TEXT TO SPEECH PROCESSING SYSTEM AND METHOD, AND AN ACOUSTIC MODEL TRAINING SYSTEM AND METHOD
Granted Nov 27, 2018 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 30%
With Interview: 49% (+19.5% lift)
Median Time to Grant: 5y 4m
PTA Risk: Low
Based on 138 resolved cases by this examiner. Grant probability derived from career allow rate.
