Prosecution Insights
Last updated: April 19, 2026
Application No. 18/787,988

NON-TRANSITORY STORAGE MEDIUM STORING SUPERVISED DATA GENERATION PROGRAM, SUPERVISED DATA GENERATION METHOD, SUPERVISED DATA GENERATION APPARATUS, TRAINING APPARATUS, AND DATA STRUCTURE OF SUPERVISED DATA

Non-Final OA: §101, §103
Filed
Jul 29, 2024
Examiner
SHENG, XIN
Art Unit
2619
Tech Center
2600 — Communications
Assignee
Noah Solution Inc.
OA Round
1 (Non-Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 72% (290 granted / 401 resolved), +10.3% vs TC average (above average)
Interview Lift: +17.3% on resolved cases with vs. without an interview
Avg Prosecution: 2y 5m typical timeline
Total Applications: 418 across all art units (17 currently pending)
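The headline figures above are simple derivations from the raw counts shown on the cards. A minimal sketch of how they are likely computed (the formulas are an assumption about the dashboard's method; the variable names are illustrative):

```python
# Reproducing the dashboard's headline figures from the raw counts above.
granted, resolved = 290, 401

career_allow_rate = granted / resolved                  # ~0.723, shown as 72%
interview_lift = 0.173                                  # +17.3 percentage points
with_interview = min(career_allow_rate + interview_lift, 1.0)

print(f"{career_allow_rate:.1%}")   # 72.3%
print(f"{with_interview:.1%}")      # 89.6%, rounded to ~90% on the card
```

Note that the "90% With Interview" card is consistent with simply adding the interview lift to the career allow rate, though the dashboard may use a more sophisticated model.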

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 75.0% (+35.0% vs TC avg)
§102: 2.2% (-37.8% vs TC avg)
§112: 7.7% (-32.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 401 resolved cases.

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

CLAIM INTERPRETATION

The following is a quotation of 35 U.S.C. 112(f): (FP 7.30.03)

(f) ELEMENT IN CLAIM FOR A COMBINATION.—An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. (FP 7.30.05)

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) (Claims 4-5) is/are: a background image generator configured to select a first target image from an image group (claims 4 and 5); a supervised data generator configured to select a second target image from the image group (claims 4 and 5); and a trained model generator configured to generate, …, a trained model (claim 5).

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. (FP 7.30.06)

Claim Rejections - 35 USC § 101

35 U.S.C. 
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim is directed to data per se (organized information) without a statutory category or technical improvement, which falls outside the scope of patentable subject matter (see MPEP § 2106.03). Claim 6 recites that it is a "data structure of supervised data”. A data structure is generally considered information — an arrangement of data — which is not itself a statutory category unless it is claimed as part of a tangible medium (e.g., “a non-transitory computer readable medium storing a data structure…”). Applicant can amend to recite “A non-transitory computer readable medium storing a data structure of supervised data comprising …” or otherwise integrate the claim into a practical application.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7 are rejected under 35 U.S.C. 103 as being unpatentable over Case et al. (US 2023/0290110). Regarding Claim 1. 
Case teaches A non-transitory storage medium storing a supervised data generation program for generating supervised data to generate a trained model for outputting a result from identifying a target in response to input image data of an image including the target corresponding to a target image (Case, abstract, the invention describes systems and methods for generating synthetic training data for machine learning models. Images of a particular object (such as an aircraft) can be received and processed to cutout the object (i.e., separate the object from the background) from the received image. The systems and methods described herein can detect areas in the background images to place an object. Once a suitable area has been detected, the cutout object image can be superimposed on the background image at the location determined to be suitable for placing the object. Superimposing the object onto the background image can include blending the two images using a plurality of blending techniques to reduce artifacts that may bias a supervised training process. [0108] FIG. 10 illustrates an example of a computing system 1000, in accordance with some examples of the disclosure. System 1000 can be a client or a server. As shown in FIG. 10, system 1000 can be any suitable type of processor-based system, such as a personal computer, workstation, server, handheld computing device (portable electronic device) such as a phone or tablet, or dedicated device. The system 1000 can include, for example, one or more of input device 1020, output device 1030, one or more processors 1010, storage 1040, and communication device 1060. Input device 1020 and output device 1030 can generally correspond to those described above and can either be connectable or integrated with the computer. 
[0110] Storage 1040 can be any suitable device that provides storage, such as an electrical, magnetic, or optical memory including a RAM, cache, hard drive, removable storage disk, or other non-transitory computer readable medium.), the program causing a computer to perform operations comprising: selecting a first target image from an image group including a plurality of different target images and performing a transformation process to generate a background image (Case, [0008] In one or more examples, a plurality of background images (i.e., terrain images) can also be received. In the context of aircraft, in one or more examples, the background images can include satellite images of terrain where an aircraft might be found. [0016] Optionally, determining a placement area of the first background image to place the first cutout object image upon comprises generating a half-tone representation of the first background image. Case didn’t explicitly teach a transformation process. However, Case described a half-tone background image created during the process of cutting out target object image. Therefore, it is obvious to a person with ordinary skill in the art that creating a half-tone representation of an image is similar to a transformation process.); and selecting a second target image from the image group and combining the second target image with the background image to generate supervised data (Case, [0107] The systems and methods described above can be used to generate a robust training data set for use in a supervised training process. The aircraft cutouts and background images used to generate the synthetic training data set can be selected to reflect different contexts (i.e., lighting, terrain conditions, environments) thereby making the machine learning classifier more robust and accurate in a variety of contexts.). Regarding Claim 2. 
Case further teaches The supervised data generation program according to claim 1, wherein the transformation process is performed on a divided image of the first target image (Case, [0098] FIG. 9 illustrates an exemplary process for superimposing an object onto a background image for generating a synthetic training image according to examples of the disclosure. In one or more examples of the disclosure, the process 900 can begin at step 902 wherein the half-tone image generated by process 700 of FIG. 7 can be smoothed to remove small clusters of white pixels that are more likely to be noise than objects from the half-tone image. In one or more examples, the half-tone image produced in the example process 700 of FIG. 7 may include stray white pixels that deviate from the pixels immediately surrounding them. These stray pixels are not likely actual obstructions (just as stray black pixels are not likely part of the terrain), and thus they may be removed in order to improve the accuracy of using the half-tone image to determine where to superimpose an aircraft cutout onto a background image. In one or more examples, a greedy method can be applied to a generated half-tone image that makes areas that are predominantly black or white more uniform. In one or more examples, the greedy method can include dividing the image into a plurality of square segments and if the number of black pixels falls below a certain threshold they can be set to white, and if the number of white pixels falls below a certain threshold they can be set to black. In this way, the smoothing can increase the available area of a background image to place an aircraft whereas without smoothing an area of a background image may have been excluded due to errant objects detected in the image.). Claim 3 is similar in scope to Claim 1, and thus is rejected under the same rationale. Claim 4 is similar in scope to Claim 1, and thus is rejected under the same rationale. Regarding Claim 5. 
Case further teaches A training apparatus (Case, [0108] FIG. 10 illustrates an example of a computing system 1000, in accordance with some examples of the disclosure. System 1000 can be a client or a server. As shown in FIG. 10, system 1000 can be any suitable type of processor-based system, such as a personal computer, workstation, server, handheld computing device (portable electronic device) such as a phone or tablet, or dedicated device. The system 1000 can include, for example, one or more of input device 1020, output device 1030, one or more processors 1010, storage 1040, and communication device 1060. [0111] Processor(s) 1010 can be any suitable processor or combination of processors, including any of, or any combination of, a central processing unit (CPU), field programmable gate array (FPGA), and application-specific integrated circuit (ASIC). Software 1050, which can be stored in storage 1040 and executed by one or more processors 1010, can include, for example, the programming that embodies the functionality or portions of the functionality of the present disclosure (e.g., as embodied in the devices as described above). Therefore, the software 1050 functions similarly to a background image generator, a supervised data generator and a trained model generator.), comprising: a background image generator configured to select a first target image from an image group including a plurality of different target images and perform a transformation process to generate a background image (Case, [0008] In one or more examples, a plurality of background images (i.e., terrain images) can also be received. In the context of aircraft, in one or more examples, the background images can include satellite images of terrain where an aircraft might be found. [0016] Optionally, determining a placement area of the first background image to place the first cutout object image upon comprises generating a half-tone representation of the first background image. 
Case didn’t explicitly teach a transformation process. However, Case described a half-tone background image created during the process of cutting out target object image. Therefore, it is obvious to a person with ordinary skill in the art that creating a half-tone representation of an image is similar to a transformation process.); a supervised data generator configured to select a second target image from the image group and combine the second target image with the background image to generate a plurality of supervised data pieces (Case, [0107] The systems and methods described above can be used to generate a robust training data set for use in a supervised training process. The aircraft cutouts and background images used to generate the synthetic training data set can be selected to reflect different contexts (i.e., lighting, terrain conditions, environments) thereby making the machine learning classifier more robust and accurate in a variety of contexts.); and a trained model generator configured to generate, with the plurality of supervised data pieces, a trained model for outputting a result from identifying a target in response to input image data of an image including the target corresponding to a target image (Case, [0070] A boneyard satellite image, such as the image 100 with annotations 102 can be used as part of a supervised training process configured to train a classifier to automatically detect the presence of aircraft in a satellite image. FIG. 2 illustrates an exemplary supervised training process for generating a machine learning model according to examples of the disclosure. In the example of FIG. 2, the process 200 can begin at step 202 wherein a particular characteristic for a given binary machine learning classifier is selected or determined (such as the presence of an aircraft, aircraft type, orientation of an aircraft, etc.). 
In one or more examples, step 402 can be optional, as the selection of characteristics needed for the machine learning classifiers can be selected beforehand in a separate process. [0073] In one or more examples, and in the case of segmentation or region-based classifiers such as region based convolutional neural networks (R-CNNs), the training images can be annotated on a pixel-by-pixel or regional basis to identify the specific pixels or regions of an image that contain specific characteristics. For instance in the case of R-CNNs, the annotations can take the form of bounding boxes or segmentations of the training images.). Regarding Claim 6. Case further teaches A data structure of supervised data (Case, [0060] In one or more examples, the process for generating synthetic training data can include receiving one or more background images. Case didn’t explicitly describe a data structure. However, it is obvious to a person with ordinary skill in the art that the generated synthetic training data has some format to include background images with target of interest.), comprising: image data of a supervised image for generating a trained model for outputting a result from identifying a target in response to input image data of an image including the target corresponding to a target image (Case, [0072] In one or more examples, if the training images received at step 204 do not include identifiers, then the process can move to step 206 wherein one or more identifiers are applied to each image of the one or more training images. In one or more examples, the training images can be annotated with identifiers using a variety of methods. For instance, in one or more examples, the training images can be manually applied by a human or humans who view each training image, determine what characteristics are contained within the image, and then annotate the image with the identifiers pertaining to those characteristics. 
Alternatively or additionally, the training images can be harvested from images that have been previously classified by a machine classifier. [0075] FIG. 3 illustrates an exemplary process for generating synthetic training images for a machine learning model according to examples of the disclosure. In one or more examples, the process 300 of FIG. 3 can represent a process for generating one or more "synthetic" training images, by obtaining images of aircraft from a variety of sources (described in detail below) and superimposing the images on a plurality of background/terrain images in a manner that provides a machine learning classifier realistic training data (so as to not bias the classifier due to artifacts created by the generation of the synthetic images). [0107] The systems and methods described above can be used to generate a robust training data set for use in a supervised training process. The aircraft cutouts and background images used to generate the synthetic training data set can be selected to reflect different contexts (i.e., lighting, terrain conditions, environments) thereby making the machine learning classifier more robust and accurate in a variety of contexts.), wherein the supervised image includes a second target image selected from an image group including a plurality of the target images corresponding to the target to be identified from one another by the trained model (Case, [0075] FIG. 3 illustrates an exemplary process for generating synthetic training images for a machine learning model according to examples of the disclosure. In one or more examples, the process 300 of FIG. 
3 can represent a process for generating one or more "synthetic" training images, by obtaining images of aircraft from a variety of sources (described in detail below) and superimposing the images on a plurality of background/terrain images in a manner that provides a machine learning classifier realistic training data (so as to not bias the classifier due to artifacts created by the generation of the synthetic images).), and a background image located around the second target image (Case, [0076] In one or more examples, the process 300 of FIG.3 can begin at step 302, wherein one or more images of aircraft are received. In one or more examples, the images received at step 302 can include overhead views of aircraft generated from a plurality of sources. … In one or more examples, the images obtained at step 302 can include actual satellite images collected from satellite imagery from boneyards or other airfields. In one or more examples, the satellite images themselves can be used to train a machine learning classifier. Additionally or alternatively, in one or more examples, the satellite images of aircraft can be used as "seeds" to create a plurality of synthetic images.), and the background image includes a transformed image resulting from a transformation process performed on a first target image selected from the image group (Case, [0075] FIG. 3 illustrates an exemplary process for generating synthetic training images for a machine learning model according to examples of the disclosure. In one or more examples, the process 300 of FIG. 3 can represent a process for generating one or more "synthetic" training images, by obtaining images of aircraft from a variety of sources (described in detail below) and superimposing the images on a plurality of background/terrain images in a manner that provides a machine learning classifier realistic training data (so as to not bias the classifier due to artifacts created by the generation of the synthetic images). 
[0076] In one or more examples, the process 300 of FIG.3 can begin at step 302, wherein one or more images of aircraft are received. In one or more examples, the images received at step 302 can include overhead views of aircraft generated from a plurality of sources. … In one or more examples, the images obtained at step 302 can include actual satellite images collected from satellite imagery from boneyards or other airfields. In one or more examples, the satellite images themselves can be used to train a machine learning classifier. Additionally or alternatively, in one or more examples, the satellite images of aircraft can be used as "seeds" to create a plurality of synthetic images. As explained in further detail below, in one or more examples, aircraft captured in a satellite image can be "cutout" and used to create a plurality of synthetic images. Thus, in one or more examples, a particular image taken from a satellite image, rather than simply creating a single training image, can be used to create multiple training images.). Regarding Claim 7. Case further teaches The data structure according to claim 6, further comprising: positional information about the second target image with respect to the background image (Case, [0106] Returning to the example of FIG. 3, once the aircraft cutout image has been superimposed onto the background image at step 312, the process can move to step 314 wherein the synthetic training image is automatically annotated with the location of the aircraft in the image for the purpose of providing the information to the supervised training process. 
In one or more examples, annotating the training image can include indicating a location of a bounding box where the aircraft is located (for instance by appending the information to the metadata of the image) so that during the supervised training process, the machine learning classifier can learn from the image by knowing where the aircraft in the image being used to train the classifier is located. In this way, in addition to multiplying the amount of training data available to train a machine learning classifier, the process 300 can also reduce the amount of effort required to annotate a training data set.).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIN SHENG whose telephone number is (571)272-5734. The examiner can normally be reached M-F 9:30AM-3:30PM and 6:00PM-8:30PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan, can be reached at 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Xin Sheng/ Primary Examiner, Art Unit 2619
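As an illustrative aside for mapping the claims onto the reference: the pipeline the rejection attributes to Case (greedy smoothing of a binary half-tone placement map, superimposing an object cutout onto a background image, and annotating the result with the object's location) can be sketched in Python. Every function name, segment size, and threshold below is a hypothetical stand-in chosen for the sketch, not code from Case or from the application.

```python
# Hypothetical sketch of the synthetic-training-data steps cited from Case:
# (1) greedy smoothing of a binary half-tone placement map (Case [0098]),
# (2) superimposing an object cutout onto a background (Case [0075]-[0076]),
# (3) annotating the result with a bounding box (Case [0106]).
# Images are plain 2-D lists of gray values; all parameters are illustrative.

def smooth_halftone(img, seg=2, threshold=2):
    """Flip stray minority pixels inside seg x seg segments of a 0/1 image."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for top in range(0, h, seg):
        for left in range(0, w, seg):
            cells = [(i, j) for i in range(top, min(top + seg, h))
                            for j in range(left, min(left + seg, w))]
            whites = sum(out[i][j] for i, j in cells)
            blacks = len(cells) - whites
            if whites < threshold and whites < blacks:
                for i, j in cells:
                    out[i][j] = 0          # stray white pixels -> black
            elif blacks < threshold and blacks < whites:
                for i, j in cells:
                    out[i][j] = 1          # stray black pixels -> white
    return out

def superimpose(background, cutout, mask, top, left):
    """Paste `cutout` where `mask` is set; return the image and a bounding box."""
    out = [row[:] for row in background]
    for i, row in enumerate(cutout):
        for j, value in enumerate(row):
            if mask[i][j]:                 # only object pixels overwrite
                out[top + i][left + j] = value
    bbox = (top, left, top + len(cutout), left + len(cutout[0]))
    return out, bbox

placement = smooth_halftone([[0, 0, 1, 1],
                             [0, 1, 1, 1],   # stray white in a black segment
                             [1, 1, 0, 1],   # stray black in a white segment
                             [1, 1, 1, 1]])
bg = [[0] * 6 for _ in range(6)]             # 6x6 black background
obj = [[255, 255], [255, 255]]               # 2x2 white "aircraft" cutout
msk = [[1, 0], [1, 1]]                       # one transparent corner
image, bbox = superimpose(bg, obj, msk, 2, 3)
```

The bounding box returned by `superimpose` plays the role of the "positional information" recited in Claim 7: it travels alongside the image so a supervised training process knows where the object sits.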

Prosecution Timeline

Jul 29, 2024
Application Filed
Mar 19, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603971
PROVIDING AWARENESS OF WHO CAN HEAR AUDIO IN A VIRTUAL CONFERENCE, AND APPLICATIONS THEREOF
2y 5m to grant Granted Apr 14, 2026
Patent 12602861
IMAGE PROCESSING METHOD, IMAGE PROCESSING DEVICE AND COMPUTER READABLE STORAGE MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12592030
INTERACTIVE THREE-DIMENSION AWARE TEXT-TO-IMAGE GENERATION
2y 5m to grant Granted Mar 31, 2026
Patent 12579920
ADAPTING A USER INTERFACE RESPONSIVE TO SCREEN SIZE ADJUSTMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12555343
3D MODEL GENERATION USING MULTIMODAL GENERATIVE AI
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 90% (+17.3%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 401 resolved cases by this examiner. Grant probability derived from career allow rate.
