DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 19 and 20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter wherein the claim recites a processing program that is not claimed as embodied in a non-transitory storage medium. Because Applicant's disclosure is not limited solely to tangible embodiments, the claimed subject matter, given the broadest reasonable interpretation, may be a carrier wave comprising instructions and is, therefore, non-statutory. The United States Patent and Trademark Office (USPTO) is obliged to give claims their broadest reasonable interpretation consistent with the specification during proceedings before the USPTO. See In re Zletz, 893 F.2d 319 (Fed. Cir. 1989) (during patent examination the pending claims must be interpreted as broadly as their terms reasonably allow). The broadest reasonable interpretation of a claim drawn to a computer readable storage medium typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable storage media, particularly when the specification is silent. (See MPEP 2111.01). When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter (See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter) and Interim Examination Instructions for Evaluating Subject Matter Eligibility Under 35 U.S.C. § 101, Aug. 24, 2009; p. 2).
To overcome this type of rejection, the claims must be amended to cover only physical computer storage media, unassociated with any intangible or non-functional transmission media. The Examiner suggests adding the word -- non-transitory -- to the claim. Other word choices will be considered, but the proposed amendment would overcome the rejection. Appropriate action is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 6-8, 10, 11, 14-16, 18 and 19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Azapian (pub. no. 20220171698).
Regarding claim 1, Azapian discloses a computing system, comprising: a processor; and memory (“FIG. 3 shows an example of a computer network 300 in which the novel methods and system of the application may find use. In an aspect, one or more servers 302 (e.g., a local server) interconnected through a local wireless network, wide area network 324, or other network may execute the processes and algorithms described herein, automatically evaluating performance of a video game by local mobile device 320, or remote mobile device 326. In another aspect, one or more servers 330 (e.g., a device farm) may execute the processes and algorithms described herein, automatically evaluating performance of a video game by multiple mobile devices 334, 336, 338 connected to the device farm server 330. Mobile devices 334-338 may be different models running different operating systems”, [0038])
storing an intelligent agent application that, when executed by the processor, causes the processor to execute the intelligent agent application (“One or more processors of the servers may enable automatic performance evaluation of a video game at one or more mobile devices with the execution of an on-device agent application that simulates actions of a player. In one aspect, the one or more processors determine, for example by retrieving from storage 304, the execution context 306 for each mobile device. Based on the execution context 306, the one or more processors load a corresponding harness application 308 to the mobile device. In an aspect, the one or more processors load an agent application script 310 built into a video game to the mobile device. The video game and the built-in agent application may be pre-loaded to the mobile device or may be loaded to the mobile device by the one or more processors contemporaneously to loading the harness application”, [0039])
and perform acts comprising: receiving an intelligent agent deployment request, wherein the intelligent agent deployment request comprises one or more tasks to be executed by an intelligent agent; deploying an intelligent agent to interact with a testing computing system, wherein the testing computing system is executing a video game application (“FIG. 4 shows aspects of a test case data structure 400 in a computer memory for use by one or more processors in automatically evaluating performance of a video game by a computing device. In an aspect, each test case data structure may be referenced by a test case identifier 410. Each test case identifier 410 may be, for example a code or an address. Based on the test case identifier 410, the one or more processors may look up further instructions and/or data in pre-defined data structures. A test case may generally specify the execution context, or the test device's testing environment. More specifically, an execution context 420 may identify the test device as a single mobile device, or an execution context 430 may identify a test farm. A device farm may include devices having different models and operating systems; thus each device has its own execution context stored in context table 432.
In an aspect, each execution context structure links to a corresponding harness application. Knowing the execution context for a test case, the one or more processors can look up and load the corresponding harness application. For example, a single-device execution context entry 420 links to a harness application 422. Similarly, each execution context of a device farm structure 432 links to a corresponding harness application in a harness application table 440. In an aspect, the test case identifier 410 may link to a game identifier 450 which may link at least to a video game code 452. In an aspect, the test case identifier 410 may also link to an agent application script 460. As described herein, one or more processors may build the game code and agent application script in one package before loading into a mobile device for testing. In an aspect, the video game identifier can be null when the video game is pre-loaded in the device.
In an aspect, the agent application may be a Finite State Machine (FSM) automata whose purpose is to imitate a player. A QA analyst or engineer may configure the agent application directly according to the test to perform. The agent application differs from the traditional automated QA process, where simple UI Automation scripts are limited to simulating mouse clicks or button taps. With the agent application, the analyst can reason in terms of in-game actions a real-life player would perform. In an aspect, the agent application may include game-play functions, debug functions, performance profiling functions and generic functions. The following lists examples of functions of an agent application. In practice, game functions will vary depending on the game and version”, [0041] - [0043]);
causing the intelligent agent to interact with the video game application and execute a first task of the one or more tasks; obtaining testing data from the intelligent agent, wherein the testing data is indicative of the execution of the first task; and storing the testing data at a data store (“At 508, one or more processors at the device execution context start the execution of the harness application. In an aspect, at 510, the harness application extracts and gathers variable values, for example configuration variables, from the mobile device, creates the agent application and starts the game. In another aspect, the harness application may direct the game to create the agent application. At 512, the game may run the script which configures the agent application. In an aspect, the game passes the script as a code “handle” to the agent. In an aspect, the game may create a test strategy, for example as specified in the script. The agent application executes its code instructions, gathers statistics and screenshots if so directed. In an aspect, the one or more processors may contemporaneously send the statistics and screenshots to a display. At 514, the harness application monitors for errors and for signal of test completion if test is a single test run. In an aspect, when the test completes the one or more processors at the device execution context collects test results, performs clean-up functions and terminates execution of the harness application”, [0098]).
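For illustration only, the quoted agent-application behavior (a finite state machine that imitates a player and executes a QA analyst's script of in-game actions) can be sketched as follows. The state names, action names, and script format below are hypothetical and are not drawn from the Azapian disclosure.

```python
# Minimal sketch of a finite-state-machine test agent that imitates a
# player by executing a scripted sequence of in-game actions, gathering
# a trace of what it did. All names here are illustrative assumptions.

class AgentFSM:
    def __init__(self, script):
        # script: ordered list of (state, action) steps supplied by a QA analyst
        self.script = script
        self.state = "idle"
        self.log = []  # gathered testing data (here, a simple action trace)

    def step(self, state, action):
        self.state = state
        self.log.append(f"{state}:{action}")  # record the in-game action taken

    def run(self):
        for state, action in self.script:
            self.step(state, action)
        self.state = "done"  # signal that all script instructions executed
        return self.log

agent = AgentFSM([("menu", "start_game"),
                  ("level_1", "move_right"),
                  ("level_1", "jump")])
trace = agent.run()
```

The returned trace stands in for the "statistics and screenshots" the quoted passage says the agent gathers during a test run.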
Regarding claim 6, Azapian discloses prior to deploying the intelligent agent, obtaining the intelligent agent from an intelligent agent data store storing a plurality of intelligent agents, wherein the intelligent agent is selected based upon the intelligent agent deployment request ([0041] – [0042]).
Regarding claim 7, Azapian discloses subsequent to obtaining the intelligent agent from the intelligent agent data store, modifying the intelligent agent based upon input received by way of a user interface executing on a client computing device ([0043]).
Regarding claim 8, Azapian discloses that the intelligent agent performs a second task based upon execution of the first task (“At computer process 106, the harness application starts the execution of the video game. The video game then waits for input actions. In an aspect, at the computer process 108, the video game starts the execution of the on-device agent application which provides input actions to the video games to simulate a player's actions. The agent application monitors the testing, gathers statistics and screenshots if so directed. At computer process 110, the harness application waits for signal from the agent application. In an aspect where a single test run is configured, for example by the QA analyst or administrator, the agent application signals that testing of the video game has completed when, for example, all script instructions have been executed. At 112, once testing of the video game has completed, the harness application performs clean up tasks, for example storing test results, clearing and freeing memory and data usage. In an aspect where continuous test is configured, the agent application restarts from the first instruction”, [0036]).
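The harness lifecycle quoted above (start the game, let the agent drive simulated player inputs, wait for a completion signal, then clean up) can be illustrated with a minimal sketch. The stub class and method names are hypothetical, not taken from the reference.

```python
# Hypothetical sketch of the harness lifecycle described in the quoted
# passage: the agent provides input actions to the game, signals test
# completion, and the harness performs clean-up. Names are illustrative.

class StubGame:
    def __init__(self):
        self.inputs = []          # input actions received from the agent

    def receive(self, action):
        self.inputs.append(action)

class StubAgent:
    def __init__(self, actions):
        self.actions = list(actions)

    def drive(self, game):
        for action in self.actions:   # simulate a player's actions
            game.receive(action)
        return "complete"             # signal that testing has completed

def run_harness(game, agent):
    status = agent.drive(game)        # agent provides input actions
    results = {"status": status, "inputs": len(game.inputs)}
    game.inputs.clear()               # clean-up: clear and free gathered data
    return results                    # stored test results

game = StubGame()
report = run_harness(game, StubAgent(["tap", "swipe", "tap"]))
```

A continuous-test configuration, as the passage notes, would simply restart the agent from its first instruction instead of returning after the completion signal.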
Regarding claim 10, Azapian discloses the intelligent agent interacts with the video game application by way of providing input to the video game application, the provided input comprising input selected from the group consisting of input from an emulated controller, input from an emulated sensor, input from an emulated keyboard, and input from an emulated mouse ([0036]).
Claims 11, 14-16 and 18 are directed to the methods implemented by the systems of claims 1, 6-8 and 10 respectively and are rejected for the same reasons as claims 1, 6-8 and 10 respectively.
Claim 19 is directed to an article of manufacture containing code that implements the system of claim 1 and is rejected for the same reasons as claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 2, 3, 5, 12, 13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Azapian (pub. no. 20220171698) in view of Cmielowski et al. (pub. no. 20210287109).
Regarding claims 2, 3 and 5, it is noted that Azapian does not disclose inputting testing data into sequential models to detect bugs. Cmielowski, however, teaches inputting testing data into sequential models to detect bugs (“In one embodiment of the present invention, a computer-implemented method for analyzing test result failures using artificial intelligence models comprises grouping a set of test failures within a plurality of test results into one or more sets of test failure groups according to a set of failure attributes. The method further comprises training a first machine learning model to differentiate between a bug failure and a test failure within the set of test failures based on the set of failure attributes and a set of historical failures. The method additionally comprises determining a failure type for each failed test in the one or more sets of test failure groups using the first machine learning model. Furthermore, the method comprises clustering the failed tests in the one or more sets of test failure groups into a set of clusters according to the set of failure attributes and the determined failure type for each failed test. Additionally, the method comprises identifying a root cause failure for each cluster based on the set of clusters and the set of failure attributes. In addition, the method comprises training a second machine learning model to predict a root cause of an unclassified failure based on identifying the root cause failure for each cluster. The method further comprises predicting the root cause of the unclassified failure using the second machine learning model”, [0003]).
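The sequential analysis Cmielowski describes (group failures by attributes, label each group's failure type, then use the labeled groups to predict the root cause of a new, unclassified failure) can be illustrated with a toy sketch. The attribute names, labels, and grouping key below are hypothetical simplifications; the reference uses trained machine learning models rather than this majority-vote stand-in.

```python
# Toy two-stage sketch of the quoted failure-analysis pipeline:
# stage 1 groups failures sharing an attribute signature; stage 2
# predicts a root cause for an unclassified failure from the matching
# group's majority label. Attribute and label names are illustrative.

from collections import Counter, defaultdict

def group_failures(failures):
    # Stage 1: cluster failures according to a set of failure attributes
    groups = defaultdict(list)
    for f in failures:
        key = (f["component"], f["error_code"])
        groups[key].append(f)
    return groups

def root_cause(groups, new_failure):
    # Stage 2: predict the root cause of an unclassified failure from
    # the majority failure type of its matching cluster
    key = (new_failure["component"], new_failure["error_code"])
    labels = [f["label"] for f in groups.get(key, [])]
    return Counter(labels).most_common(1)[0][0] if labels else "unknown"

history = [
    {"component": "renderer", "error_code": 7, "label": "bug"},
    {"component": "renderer", "error_code": 7, "label": "bug"},
    {"component": "net", "error_code": 3, "label": "test_failure"},
]
groups = group_failures(history)
prediction = root_cause(groups, {"component": "renderer", "error_code": 7})
```

In the claimed combination, the testing data gathered by Azapian's agent would serve as the failure history fed into such models.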
Exemplary rationales that may support a conclusion of obviousness include use of a known technique to improve similar devices (methods, or products) in the same way. Here, both Azapian and Cmielowski are directed to automated testing systems. To implement the automated error classification and detection of Cmielowski into the Azapian system would be to use a known technique to improve a similar method in the same way. Therefore, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the claimed invention to modify Azapian to include the analysis models of Cmielowski. To do so would reduce the manpower required to perform testing on new versions of an application.
Claims 12 and 13 are directed to the methods implemented by the systems of claims 2 and 3 respectively and are rejected for the same reasons as claims 2 and 3 respectively.
Claim 20 is directed to an article of manufacture containing code that implements the system of claim 2 and is rejected for the same reasons as claim 2.
Allowable Subject Matter
Claims 4, 9 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAWRENCE STEFAN GALKA whose telephone number is (571)270-1386. The examiner can normally be reached M-F 6-9 & 12-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Lewis can be reached at 571-272-7673. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LAWRENCE S GALKA/Primary Examiner, Art Unit 3715