Prosecution Insights
Last updated: April 19, 2026
Application No. 18/515,994

LEARNING SYSTEM AND METHOD FOR AUTOMATIC OBSERVING SERVICE

Non-Final OA — §101, §103, §112
Filed
Nov 21, 2023
Examiner
KIM, KEVIN Y
Art Unit
3715
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Gwangju Institute of Science and Technology
OA Round
1 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 78% (728 granted / 934 resolved), +7.9% vs TC avg (above average)
Interview Lift: +16.2% among resolved cases with interview
Avg Prosecution: 2y 7m typical timeline; 35 applications currently pending
Total Applications: 969 across all art units

Statute-Specific Performance

§101: 12.7% allow rate (-27.3% vs TC avg)
§103: 40.6% allow rate (+0.6% vs TC avg)
§102: 16.3% allow rate (-23.7% vs TC avg)
§112: 15.0% allow rate (-25.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 934 resolved cases
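The "vs TC avg" deltas above can be checked arithmetically: subtracting each delta from the examiner's statute-specific allow rate back-computes the implied Tech Center average. This is a minimal sketch (the subtraction formula is an assumption about how the dashboard derives its deltas, not stated in the source); notably, all four statutes imply the same 40.0% estimate.

```python
# Back-compute the implied Tech Center average allow rate per statute:
# examiner rate minus the reported "vs TC avg" delta (assumed formula).
examiner_rates = {"101": 12.7, "103": 40.6, "102": 16.3, "112": 15.0}
deltas_vs_tc = {"101": -27.3, "103": +0.6, "102": -23.7, "112": -25.0}

tc_averages = {s: round(examiner_rates[s] - deltas_vs_tc[s], 1)
               for s in examiner_rates}
print(tc_averages)  # every statute implies the same 40.0% TC estimate
```

That all four statutes resolve to a single 40.0% figure suggests the dashboard compares against one overall TC average rather than per-statute averages.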

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a learning system wherein an artificial neural network is trained to predict an observing viewport. This judicial exception is not integrated into a practical application because the claims do not include any elements that transform the abstract idea into patent-eligible subject matter. The claims do not include additional elements sufficient to amount to significantly more than the judicial exception because they amount to implementing the abstract idea on a computer and do not improve the functioning of the computer.

In the instant application, the claims are directed to using a generic machine learning algorithm, i.e., an artificial neural network model that trains to predict an observing viewport. The claims do not include additional elements sufficient to amount to significantly more than the judicial exception because they do not improve the machine learning process. Rather, the claims merely utilize machine learning in a new data environment and do not improve upon the machine learning algorithm. The courts have stated that claims directed to the use of machine learning in a new environment that do not improve the machine learning technology are not patent-eligible.
The courts have further stated that "the claimed methods are not rendered patent eligible by the fact that (using existing machine learning technology) they perform a task previously undertaken by humans with greater speed and efficiency than could previously be achieved." See Recentive Analytics, Inc. v. Fox Corp.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 7, and 11 recite the limitation "abstracting a game screen." There is no explanation or definition in the claims as to what is meant by "abstracting," and the dictionary definition of abstracting (to consider something theoretically or separately) fails to clarify the limitation. Similar language can be found in claims 2, 8, and 12. Because the claims and specification fail to define what "abstracting" does to the game screen, the Examiner will interpret all instances of the term "abstract[ing]" as meaning "handling data" until further clarification and correction can be provided by Applicant.

Claims 1, 7, and 11 recite the limitation "the game screens." There is insufficient antecedent basis for this limitation in the claim.
The claims recite a singular game screen, and therefore it is unclear whether the claims are missing limitations disclosing a plurality of game screens, or whether the limitation is a typographical error and refers to the singular game screen claimed earlier. The Examiner will assume that the claim refers to a single game screen until clarification and correction are provided by Applicant. Claims 3-6, 9-10, and 13-17 depend from the above claims and inherit these deficiencies.

Claims 2, 8, and 12 recite the limitation "the same." There is insufficient antecedent basis for this limitation in the claim.

Claims 4 and 14 reference "the observation area." However, claims 3 and 13 have disclosed multiple observation areas with the language "each collected observation area." It is unclear which observation area is being referenced in claims 4 and 14, since there are multiple observation data collection portions each collecting observation area information. Furthermore, due to these different recitations, it is unclear whether "the observation area" refers to "each collected observation area," or whether it refers to a singular observation area that was not claimed (and therefore lacks antecedent basis). The Examiner will assume that the limitation refers to the collected observation area information until clarification is provided.

Claim 10 recites the limitation "the observation areas." There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-17 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (US 2020/0027019) in view of Weinzaepfel et al. (US 2020/0364509).

Re claim 1: Yang discloses a learning system comprising: a game data processing portion that generates game input data by abstracting a game screen of a frame configuring a game video file for learning (fig. 6 and par. [0009]); an observing data processing portion that generates a plurality of human data based on observation areas each selected by a plurality of humans on the game screen ([0009], [0022], and [0335]); and an artificial neural network model that trains to predict an observing viewport, which is an area of human interest among the game screens, based on the game input data and the plurality of human data ([0278], [0343]). However, Yang does not disclose masking the data. Weinzaepfel teaches a neural network used with reference images depicting an object-of-interest in a predetermined environment, similar to the observation area and observing viewport of Yang ([0152], [0163]). Furthermore, Weinzaepfel utilizes a mask for an object-of-interest in the frames of training data ([0177]). It would have been obvious to utilize data masking as taught by Weinzaepfel with the neural network of Yang in order to protect information while retaining its usefulness as data for the system.
Re claim 2: Yang discloses a plurality of observation data collection portions collecting observation area information selected by the plurality of humans in response to the game input data and outputting the human data corresponding to each collected observation area to the artificial neural network model (see fig. 9: the neural network receives input from a plurality of humans on their devices). Weinzaepfel has been discussed above regarding masking data.

Re claim 3: Yang discloses that the observing data processing portion comprises a plurality of observation data collection portions, wherein each observation data collection portion collects observation area information selected by the plurality of humans in response to the game input data and outputs the human data corresponding to each collected observation data to the artificial neural network model ([0335], [0343], [0348], and [0350]).

Re claim 4: Weinzaepfel teaches the masked human data comprising masked data corresponding to the observation area, a class type of the masked data, and coordinate information indicating a location of the masked data ([0177]: the masked data is for an object-of-interest in the frame; it therefore corresponds to the observation area, is a class type of training data, and includes coordinate information in the form of a bounding box defined by pixels in the frame).

Re claim 5: Weinzaepfel teaches implementing Mask R-CNN ([0058], [0063], and [0134]).

Re claim 6: Yang has disclosed learning an observation pattern of an area of human interest among the game screens (see the above rejections, wherein Yang uses point-of-interest data to train the neural network, points of interest being considered an observation pattern of an area of human interest).

Re claims 7-17: see the rejections to claims 1-6.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kevin Y Kim, whose telephone number is (571) 270-3215.
The examiner can normally be reached Monday-Friday. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xuan Thai, can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEVIN Y KIM/
Primary Examiner, Art Unit 3715

Prosecution Timeline

Nov 21, 2023
Application Filed
Dec 04, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599840
VIRTUAL CHARACTER CONTROL METHOD AND APPARATUS, STORAGE MEDIUM, AND ELECTRONIC DEVICE
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12594492
HANDHELD CONTROLLER WITH HAND DETECTION SENSORS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12589277
Smart Sports Result Implications
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12582903
NON-TRANSITORY COMPUTER READABLE MEDIUM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING SYSTEM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12569759
INFORMATION PROCESSING APPARATUS, GAME VIDEO EDITING METHOD, AND METADATA SERVER
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 94% (+16.2%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 934 resolved cases by this examiner. Grant probability derived from career allow rate.
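The headline projections follow directly from the career record: a sketch of the arithmetic, assuming the grant probability is the rounded career allow rate (728/934) and the with-interview figure simply adds the +16.2 percentage-point interview lift (both formulas are inferred, not stated in the source).

```python
# Derive the dashboard's headline numbers from the examiner's career record.
granted, resolved = 728, 934

grant_probability = round(granted / resolved * 100)   # career allow rate, %
interview_lift = 16.2                                 # percentage points
with_interview = round(granted / resolved * 100 + interview_lift)

print(grant_probability, with_interview)  # 78 94
```

Both rounded figures match the displayed 78% and 94%, consistent with the footnote that grant probability is derived from the career allow rate.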
