Prosecution Insights
Last updated: April 19, 2026
Application No. 17/680,221

DISTRIBUTED APPLICATION TESTING IN CLOUD COMPUTING ENVIRONMENTS

Final Rejection (§103)

Filed: Feb 24, 2022
Examiner: WAI, ERIC CHARLES
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 3 (Final)

Grant Probability: 82% (Favorable)
Predicted OA Rounds: 4-5
Predicted Time to Grant: 3y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% — above average (529 granted / 644 resolved; +27.1% vs TC avg)
Interview Lift: strong, +27.2% allowance-rate gain on resolved cases with an interview vs. without
Typical Timeline: 3y 9m avg prosecution; 27 applications currently pending
Career History: 671 total applications across all art units

Statute-Specific Performance

§101: 15.7% (-24.3% vs TC avg)
§103: 50.0% (+10.0% vs TC avg)
§102: 11.4% (-28.6% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 644 resolved cases
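The headline figures above follow directly from the career counts in this report. A minimal sketch of the arithmetic (the counts are taken from this report; the rounding conventions are assumed):

```python
# Arithmetic behind the examiner statistics reported above.
# Counts (529 granted of 644 resolved, +27.1% vs TC average delta)
# are taken from this report; rounding conventions are assumed.
granted = 529
resolved = 644

allow_rate = granted / resolved          # career allow rate
tc_avg = allow_rate - 0.271              # Tech Center average implied by the delta

print(f"Career allow rate: {allow_rate:.1%}")  # shown as 82% in the report
print(f"Implied TC average: {tc_avg:.1%}")
```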

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending in this application.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-7, 8-14, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Belihomji et al. (US Pub. No. 2021/0303451 A1), in view of “Game Tester Developer Platform Demo” (hereafter Game Tester), in view of Greene et al. (US PG Pub No. 2002/0172931 A1), and further in view of Jaeh et al. (US PG Pub No. 2018/0225982 A1).

Regarding claim 1, Belihomji teaches the invention substantially as claimed, including a method comprising: selecting, via an application hosting platform executed using a processing device, an application to conduct a test session (Figs. 1A-1B, item 110 testing platform, test application 1-M; [0015] As shown in FIG.
1B, and by reference number 125, testing platform 110 may process data identifying the parameters and data identifying the application, with a machine learning model, to generate test applications for testing corresponding modifications to the application.); wherein the application is hosted using a virtualized computing environment instantiated using the application hosting platform ([0063] Cloud computing environment 410 includes an environment that hosts testing platform 110…cloud computing environment 410 may include a group of computing resources 420; [0065] computing resource 420 includes a group of cloud resources, such as one or more applications (“APPs”) 420-1; [0066] Application 420-1 includes one or more software applications that may be provided to or accessed by user device 105. Application 420-1 may eliminate a need to install and execute the software applications on user device 105.); selecting, via the application hosting platform, a set of users associated with the application hosting platform to interact with the application during the test session ([0020] As shown in FIG. 1C, and by reference number 130, testing platform 110 may define, based on the parameters, test group sizes of test groups for testing the test applications. For example, testing platform 110 may define each test group size to be a particular percentage (e.g., five percent, ten percent, and/or the like) of user devices 105 to which testing will be applied. 
[0021] Testing platform 110 may define the test group sizes based on different types of user devices 105 (e.g., smart glasses, smart phones, smart watches, laptops, tablets, computers, and/or the like) associated with the users, different hardware (e.g., different types of processors, different amounts of memory, and/or the like) associated with user devices 105, different software (e.g., different operating systems) utilized by user devices 105, different geographic locations of user devices 105, and/or the like.); selecting, via the application hosting platform, a set of observers associated with the application hosting platform to monitor an interaction between the application and one or more users of the set of users during the test session ([0026] The feedback may include user devices 105 monitoring user interaction with the test applications, capturing data related to the user interaction, and providing the data related to the user interaction to testing platform 110. The test applications may include software (as observers) (e.g., embedding within the test applications) that monitors (e.g., with user permission) how the users interact with the test application and automatically provides information indicating the interactions to testing platform 110; [0061] Testing platform 110 includes one or more devices that utilize machine learning to generate modified applications for concurrent testing. In some implementations, testing platform 110 may be designed to be modular such that certain software components may be swapped in or out depending on a particular need.
As such, testing platform 110 may be easily and/or quickly reconfigured for different uses (as the system is modular, Belihomji’s system would be required to select the software if monitoring is to be performed). In some implementations, testing platform 110 may receive information from and/or transmit information to one or more user devices 105); initiating, via the application hosting platform, the test session upon selecting the application, the set of users, and the set of observers (Fig. 1E, 140, Test group 1-M; Fig. 1F, 145 concurrently provide the test application (as initiating); [Fig. 6] After generating an application 610, assigning the set of users 650, the test application is provided (as initiating the test session) to the sets of the plurality of user devices 660); causing content data corresponding to the application to be streamed to a user device corresponding to each user of the set of users for presentation in a player graphical user interface (GUI) ([0024] As shown in FIG. 1F, and by reference number 145, testing platform 110 may concurrently provide the test applications to the corresponding sets of the plurality of user devices 105 based on the test groups. In some implementations, testing platform 110 may enable one or more sandboxes to execute concurrent test applications for each set of the plurality of user devices 105 and to isolate one test application from another test application to measure an exact impact of each test application (e.g., by testing different combinations of the steps of the application); [0037] As indicated above, FIGS. 1A-1I are provided merely as examples. Other examples may differ from what was described with regard to FIGS. 1A-1I. The number and arrangement of devices and networks shown in FIGS. 1A-1I are provided as an example.
In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown in FIGS. 1A-1I. Furthermore, two or more devices shown in FIGS. 1A-1I may be implemented within a single device, or a single device shown in FIGS. 1A-1I may be implemented as multiple, distributed devices (as a distributed system the test applications are required to be streamed). Additionally, or alternatively, a set of devices (e.g., one or more devices) of FIGS. 1A-1I may perform one or more functions described as being performed by another set of devices of FIGS. 1A-1I); causing a video stream of the test session associated with each user from the set of users ([0026] The feedback may include user devices 105 monitoring user interaction with the test applications, capturing data related to the user interaction, and providing the data related to the user interaction to testing platform 110…the software may automatically capture video and/or audio associated with the users (e.g., while the users interact with the test applications) and provide the video and/or audio to testing platform 110; [0027] For example, testing platform 110 may provide a graphical user interface that identifies the test applications, the modifications made to the application to generate the test applications, and the user feedback associated with the test applications. Belihomji does not teach authenticating, via the application hosting platform, the set of users interacting with the application during the test session. However, Game Tester teaches authenticating, via the application hosting platform, the set of users interacting with the application during the test session. 
([2:27] Depicts a bullet point that says “Authenticate testers and restrict access to your title where required” when integrating the Game Tester platform’s API) [media_image1.png]

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined Belihomji with Game Tester’s platform. A person of ordinary skill in the art would have been motivated to make this combination in order to restrict access to your title where required [Game Tester 2:27].

Belihomji does not teach causing a video stream of the test session associated with each user from the set of users to be transmitted to a user device of a corresponding observer from the set of observers for presentation in an observer GUI. Greene teaches providing a mechanism by which a testing environment may be monitored from a remote location ([0049]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to transmit the video stream to a user device of a corresponding observer. A person of ordinary skill in the art would have been motivated to make this combination in order to allow for remote monitoring as taught by Greene.

Belihomji does not teach detecting, during a period of the video stream of the test session, observer input via the observer GUI at the user device of the corresponding observer from the set of observers, wherein the observer input identifies an occurrence of an event of interest in the video stream of the test session. Jaeh teaches a system for monitoring a test-taker (user) in real-time by collecting behavioral input from their camera, microphone, keyboard, and mouse and allowing a proctor or exam facilitator (observer) to observe the session in real-time and interact with the test-taker using voice and chat ([0063]; [0068]; [0145], wherein proctors use an interface/GUI).
Jaeh further teaches that the system detects aberrant behavior and that the proctor or system may flag (observer input) the session for further review ([0063]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to detect, during a period of the video stream of the test session, observer input via the observer GUI at the user device of the corresponding observer from the set of observers, wherein the observer input identifies an occurrence of an event of interest in the video stream of the test session. A person of ordinary skill in the art would have been motivated to make this combination in order to allow for the flagging of aberrant behavior for further review and/or training of machine learning models as taught by Jaeh ([0173]).

Regarding claim 2, Belihomji, Game Tester, Greene, and Jaeh teach the method of claim 1. Belihomji teaches wherein causing the video stream of the test session to be transmitted further comprises: receiving user input captured for each user from the set of users interacting with the content of the application ([0026] The feedback may include user devices 105 monitoring user interaction with the test applications, capturing data related to the user interaction); causing the user input to be transmitted to the user device of the corresponding observer from the set of observers for presentation in the observer GUI ([0026] providing the data related to the user interaction to testing platform 110; [0027] For example, testing platform 110 may provide a graphical user interface that identifies the test applications, the modifications made to the application to generate the test applications, and the user feedback associated with the test applications).

Regarding claim 4, Belihomji, Game Tester, Greene, and Jaeh teach the method of claim 1.
Belihomji teaches causing a video stream of the test session associated with two or more users of the set of users to be transmitted to the user device of the corresponding observer from the set of observers for presentation in the observer GUI. ([0010] The testing platform may assign, based on the test group sizes, sets of the plurality of user devices to the test groups for testing the test applications, and may provide the test applications concurrently to the corresponding sets of the plurality of user devices based on the test groups. The testing platform may receive from the corresponding sets of the plurality of user devices, and in near-real time, feedback associated with the test applications, and may perform one or more actions based on the feedback associated with the test applications; [0026] The software may automatically capture video and/or audio associated with the users (e.g., while the users interact with the test applications) and provide the video and/or audio to testing platform 110; [0027] For example, testing platform 110 may provide a graphical user interface that identifies the test applications, the modifications made to the application to generate the test applications, and the user feedback associated with the test applications).

Regarding claim 5, Belihomji, Game Tester, Greene, and Jaeh teach the method of claim 1. Belihomji teaches storing, at a data store, a recording of the video stream of the test session associated with the user of the set of users after termination of the test session.
([0019] In some implementations, testing platform 110 may train the machine learning model with historical data (e.g., historical parameters to test modifications to applications, historical application data, and/or the like) (as historical data requires information to be from a past test session and therefore it must be stored after a session ends); [0026] The software may automatically capture video and/or audio associated with the users (e.g., while the users interact with the test applications) and provide the video and/or audio to testing platform 110. The video and/or audio may provide indications of whether the users are happy, angry, frustrated, and/or the like with the test applications).

Regarding claim 6, Belihomji, Game Tester, Greene, and Jaeh teach the method of claim 5. Belihomji teaches allowing the corresponding observer to access, via the observer GUI, the recording of the video stream of the test session after termination of the test session. ([0100] collect, store, employ personal information of individuals; [0019] In some implementations, testing platform 110 may train the machine learning model with historical data (e.g., historical parameters to test modifications to applications, historical application data, and/or the like) (as historical data requires information to be from a past test session and therefore it must be stored after a session ends); [0026] The software may automatically capture video and/or audio associated with the users (e.g., while the users interact with the test applications) and provide the video and/or audio to testing platform 110; [0027] For example, testing platform 110 may provide a graphical user interface that identifies the test applications, the modifications made to the application to generate the test applications, and the user feedback associated with the test applications).

Regarding claim 7, Belihomji, Game Tester, Greene, and Jaeh teach the method of claim 1.
Jaeh teaches storing, at a data store, an indication of the period during which the observer input is detected in response to the detecting the observer input. ([0063]; [0068])

Regarding claim 8, Belihomji, Game Tester, Greene, and Jaeh teach the method of claim 1. Game Tester teaches selecting, via the application hosting platform, a duration for the test session upon selecting the application; and terminating, via the application hosting platform, the test session after the duration expires. ([12:35-12:48 Transcript] “You can select a specific end date if you if you have a strict deadline and you need the test absolutely to finish by that day”; [12:45 Video] Depicts the option to select a test start date and end date on the Game Tester platform). [media_image2.png]

Regarding claim 9, Belihomji, Game Tester, Greene, and Jaeh teach the method of claim 1. Belihomji teaches modifying, via the application hosting platform, the test session, wherein the modification comprises at least one of modifying the set of users, ([0021] Testing platform 110 (as the application hosting platform) may define the test group sizes based on different types of user devices 105 (e.g., smart glasses, smart phones, smart watches, laptops, tablets, computers, and/or the like) associated with the users, different hardware (e.g., different types of processors, different amounts of memory, and/or the like) associated with user devices 105, different software (e.g., different operating systems) utilized by user devices 105, different geographic locations of user devices 105, and/or the like) modifying the set of observers, ([0026] The feedback may include user devices 105 monitoring user interaction with the test applications, capturing data related to the user interaction, and providing the data related to the user interaction to testing platform 110.
The test applications may include software (as observers) (e.g., embedding within the test applications) that monitors (e.g., with user permission) how the users interact with the test application and automatically provides information indicating the interactions to testing platform 110; [0061] Testing platform 110 includes one or more devices that utilize machine learning to generate modified applications for concurrent testing. In some implementations, testing platform 110 may be designed to be modular such that certain software components may be swapped in or out depending on a particular need. As such, testing platform 110 may be easily and/or quickly reconfigured for different uses (as the system is modular, Belihomji’s system would be required to select the software if monitoring is to be performed). In some implementations, testing platform 110 may receive information from and/or transmit information to one or more user devices 105); …, or modifying a build of the application ([0033] Additionally, or alternatively, testing platform 110 may propose one or more additional modifications to the application based on the feedback associated with the test applications, and may generate one or more new test applications based on the one or more additional modifications to the application). Game Tester teaches modifying a duration of the test session ([12:35-12:48 Transcript] “You can select a specific end date if you if you have a strict deadline and you need the test absolutely to finish by that day”).

Regarding claim 10, Belihomji, Game Tester, Greene, and Jaeh teach the method of claim 1.
Belihomji teaches implementing, using a server coupled with the processing device, a number of test sessions to be executed. ([0033] Additionally, or alternatively, testing platform 110 may generate one or more new test applications based on the feedback associated with the test applications, and may provide the one or more new test applications to one or more new sets of the plurality of user devices 105; [0072] Device 500 may correspond to user device 105, testing platform 110, and/or computing resource 420. In some implementations, user device 105, testing platform 110, and/or computing resource 420 may include one or more devices 500 and/or one or more components of device 500. As shown in FIG. 5, device 500 may include a bus 510, a processor 520, a memory 530, a storage component 540, an input component 550, an output component 560, and a communication interface 570).

Regarding claim 11, Belihomji, Game Tester, Greene, and Jaeh teach the method of claim 1. Belihomji teaches causing, via the application hosting platform, information associated with the test session to be provided to at least one observer of the set of observers, wherein the information comprises at least one of the set of users or a build of the application to be presented in the observer GUI for the observer of the set of observers. ([0026] The feedback may include the users filling out feedback forms (as information of the set of users) and providing, via user devices 105, the feedback forms to testing platform 110. [0027] In some implementations, the one or more actions may include testing platform 110 providing a user interface that includes the feedback associated with the test applications. For example, testing platform 110 may provide a graphical user interface that identifies the test applications, the modifications made to the application to generate the test applications, and the user feedback associated with the test applications).
Regarding claim 12, Belihomji, Game Tester, Greene, and Jaeh teach the method of claim 1. Game Tester teaches causing a notification to be presented in the player GUI for each user device corresponding to each user of the set of users to execute the application for the test session in response to selecting the set of users. ([16:43 Video] Depicts a preview of the test invite users receive in their GUI. The test invite includes a reminder to start the test along with a time limit and a download) [media_image3.png]

Regarding claims 13-14 and 16-19, they are apparatus claims of claims 1-2, 4-6, and 8. Therefore, they are rejected for the same reasons as claims 1-2, 4-6, and 8 above.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Belihomji et al. (US Pub. No. 2021/0303451 A1), in view of Greene et al. (US PG Pub No. 2002/0172931 A1), and further in view of Jaeh et al. (US PG Pub No. 2018/0225982 A1).

Regarding claim 20, Belihomji teaches the invention substantially as claimed, including create, via an application hosting platform executed using one or more processing devices, a test session ([Fig. 6] After generating an application 610, assigning the set of users 650, the test application is provided to the sets of the plurality of user devices 660); for an application hosted using a virtualized computing environment instantiated using the application hosting platform ([0063] Cloud computing environment 410 includes an environment that hosts testing platform 110…cloud computing environment 410 may include a group of computing resources 420; [0065] computing resource 420 includes a group of cloud resources, such as one or more applications (“APPs”) 420-1; [0066] Application 420-1 includes one or more software applications that may be provided to or accessed by user device 105. Application 420-1 may eliminate a need to install and execute the software applications on user device 105.)
wherein creating the test session comprises selecting a set of users ([0020] As shown in FIG. 1C, and by reference number 130, testing platform 110 (as the application hosting platform) may define, based on the parameters, test group sizes of test groups for testing the test applications. For example, testing platform 110 may define each test group size to be a particular percentage (e.g., five percent, ten percent, and/or the like) of user devices 105 to which testing will be applied; [0021] Testing platform 110 may define the test group sizes based on different types of user devices 105 (e.g., smart glasses, smart phones, smart watches, laptops, tablets, computers, and/or the like) associated with the users, different hardware (e.g., different types of processors, different amounts of memory, and/or the like) associated with user devices 105, different software (e.g., different operating systems) utilized by user devices 105, different geographic locations of user devices 105, and/or the like.) and selecting a set of observers associated with the application hosting platform ([0026] The feedback may include user devices 105 monitoring user interaction with the test applications, capturing data related to the user interaction, and providing the data related to the user interaction to testing platform 110.
The test applications may include software (as observers) (e.g., embedding within the test applications) that monitors (e.g., with user permission) how the users interact with the test application and automatically provides information indicating the interactions to testing platform 110; [0061] Testing platform 110 includes one or more devices that utilize machine learning to generate modified applications for concurrent testing. In some implementations, testing platform 110 may be designed to be modular such that certain software components may be swapped in or out depending on a particular need. As such, testing platform 110 may be easily and/or quickly reconfigured for different uses (as the system is modular, Belihomji’s system would be required to select the software if monitoring is to be performed). In some implementations, testing platform 110 may receive information from and/or transmit information to one or more user devices 105); cause a video stream of the test session associated with at least one user of the set of users ([0026] The feedback may include user devices 105 monitoring user interaction with the test applications, capturing data related to the user interaction, and providing the data related to the user interaction to testing platform 110…the software may automatically capture video and/or audio associated with the users (e.g., while the users interact with the test applications) and provide the video and/or audio to testing platform 110; [0027] For example, testing platform 110 may provide a graphical user interface that identifies the test applications, the modifications made to the application to generate the test applications, and the user feedback associated with the test applications.).
storing, at a data store, a recording of the video stream of the test session associated with the at least one user of the set of users, wherein the recording of the video stream of the test session is accessible to the corresponding observer of the set of observers after termination of the test session. ([0019] In some implementations, testing platform 110 may train the machine learning model with historical data (e.g., historical parameters to test modifications to applications, historical application data, and/or the like) (as historical data requires information to be from a past test session and therefore it must be stored after a session ends); [0026] The software may automatically capture video and/or audio associated with the users (e.g., while the users interact with the test applications) and provide the video and/or audio to testing platform 110; [0027] For example, testing platform 110 may provide a graphical user interface that identifies the test applications, the modifications made to the application to generate the test applications, and the user feedback associated with the test applications).

Belihomji does not teach causing a video stream of the test session associated with each user from the set of users to be streamed to a user device of a corresponding observer from the set of observers. Greene teaches providing a mechanism by which a testing environment may be monitored from a remote location ([0049]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to transmit the video stream to a user device of a corresponding observer. A person of ordinary skill in the art would have been motivated to make this combination in order to allow for remote monitoring as taught by Greene.
Belihomji does not teach detecting, during a period of the video stream of the test session, observer input via the observer GUI at the user device of the corresponding observer from the set of observers, wherein the observer input identifies an occurrence of an event of interest in the video stream of the test session. Jaeh teaches a system for monitoring a test-taker (user) in real-time by collecting behavioral input from their camera, microphone, keyboard, and mouse and allowing a proctor or exam facilitator (observer) to observe the session in real-time and interact with the test-taker using voice and chat ([0063]; [0068]; [0145], wherein proctors use an interface/GUI). Jaeh further teaches that the system detects aberrant behavior and that the proctor or system may flag (observer input) the session for further review ([0063]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to detect, during a period of the video stream of the test session, observer input via the observer GUI at the user device of the corresponding observer from the set of observers, wherein the observer input identifies an occurrence of an event of interest in the video stream of the test session. A person of ordinary skill in the art would have been motivated to make this combination in order to allow for the flagging of aberrant behavior for further review and/or training of machine learning models as taught by Jaeh ([0173]).

Claims 3 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Belihomji et al. (US Pub. No. 2021/0303451 A1), “Game Tester Developer Platform Demo”, Greene et al. (US PG Pub No. 2002/0172931 A1), and Jaeh et al. (US PG Pub No. 2018/0225982 A1), as applied to claims 1 and 13 above, and further in view of “Frequently Asked Questions by Companies” (hereafter Antidote.gg).

Regarding claim 3, Belihomji, Game Tester, Greene, and Jaeh teach the method according to claim 1.
Belihomji teaches wherein causing the video stream of the test session to be transmitted further comprises: receiving a recording of each user from the set of users ([0026] The software may automatically capture video and/or audio associated with the users (e.g., while the users interact with the test applications)); causing the recording of each user from the set of users to be transmitted to the user device of the corresponding observer from the set of observers for presentation in the observer GUI ([0026] provide the video and/or audio to testing platform 110; [0027] For example, testing platform 110 may provide a graphical user interface that identifies the test applications, the modifications made to the application to generate the test applications, and the user feedback associated with the test applications).

Belihomji, Game Tester, Greene, and Jaeh do not explicitly teach wherein the recording is created using a webcam of each user device of each user from the set of users. However, Antidote.gg teaches wherein the recording is created using a webcam of each user device of each user from the set of users ([“What information can I get from a playtest?”] Antidote provides you with several tools that are configurable at the moment you define your project: …face recording). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined Belihomji and Game Tester’s present invention with Antidote.gg’s UX and analytics platform. A person of ordinary skill in the art would have been motivated to make this combination in order to help fully understand how players experience their game [Antidote.gg “What information can I get from a playtest?”].

Regarding claim 15, it is an apparatus claim of claim 3. Therefore, it is rejected for the same reason as claim 3 above.
Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC C WAI, whose telephone number is (571) 270-1012. The examiner can normally be reached Monday-Friday, 9-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aimee Li, can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Eric C Wai/
Primary Examiner, Art Unit 2195

Prosecution Timeline

Feb 24, 2022
Application Filed
Oct 28, 2024
Non-Final Rejection — §103
Jan 09, 2025
Response Filed
Jul 05, 2025
Non-Final Rejection — §103
Oct 09, 2025
Response Filed
Jan 19, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602261
CONTAINER SCHEDULING ACCORDING TO PREEMPTING A SET OF PREEMPTABLE CONTAINERS DEPLOYED IN A CLUSTER
2y 5m to grant Granted Apr 14, 2026
Patent 12602248
METHOD AND DEVICE OF LAUNCHING AN APPLICATION IN BACKGROUND
2y 5m to grant Granted Apr 14, 2026
Patent 12585498
SYSTEM AND METHOD FOR RESOURCE MANAGEMENT IN DYNAMIC SYSTEMS
2y 5m to grant Granted Mar 24, 2026
Patent 12585503
UNIFIED RESOURCE MANAGEMENT ARCHITECTURE FOR WORKLOAD SCHEDULERS
2y 5m to grant Granted Mar 24, 2026
Patent 12579001
REINFORCEMENT LEARNING SPACE STATE PRUNING USING RESTRICTED BOLTZMANN MACHINES
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

4-5
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+27.2%)
3y 9m
Median Time to Grant
High
PTA Risk
Based on 644 resolved cases by this examiner. Grant probability derived from career allow rate.
