Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This action is in response to the amendment filed on November 18, 2025. Claim 1 has been amended, claims 2-20 have been cancelled, and claims 21-39 have been added. The amended claim limitations have been fully considered and overcome the 35 U.S.C. 112(b) and 102 rejections. The Office action has been updated to reflect the amended claims. Claims 1 and 21-39 remain rejected.
Response to Arguments
Applicant argues that the newly amended and added limitations are not found in the references. In response, the Office action has been updated to address the amended claims, and newly found prior art has been cited in support. Niinuma and Terry explicitly teach the amended claim limitations [Terry: 0138 “A confidence score”]; when the confidence value associated with the output of a machine learning model satisfies a threshold [Terry: 0148 “may be employed when the deployed model falls below a confidence threshold”], receiving additional user image data [Terry: 0174 “receive information from the network”]. Claims 1 and 21-39 remain rejected.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 30, 34, and 39 are rejected under 35 U.S.C. 103 as being unpatentable over Niinuma et al. (U.S. Patent Publication No. 2023/0029505), in view of Terry et al. (U.S. Patent Publication No. 2019/0180195).
Regarding claim 1, Niinuma discloses a computer-implemented method comprising: receiving user image data (interpreted as obtaining images of the specific user) [Niinuma: 0033 “At block 310, an image of a subject may be obtained that includes at least the face of the subject. The image of the subject may be obtained through any method wherein the final result is a 2D image of at least the face of the subject”] (teaches obtaining a subject's facial image); generating a 3D topology based at least in part on features extracted from the received additional user image data (interpreted as building a 3D representation using features derived from the additional images) [Niinuma: 0035 “Additionally or alternatively, a 3D registration may be made of the input image and/or the new image (e.g., the image depicting the AU combination and category of intensity for which the additional image is being synthesized) to facilitate synthesis of a high-quality image. In some embodiments, one or more loss parameters may be utilized when synthesizing the images to facilitate generation of high-quality images.”] (teaches forming a 3D rendering/registration of the subject's face from the subject's image); generating personalized training data based on the generated 3D topology [Niinuma: 0030 “Based on the 3D registration, the synthesized images 230b may be performed using the input images 210b as the base.”]; and updating the machine learning model using the personalized training data (interpreted as fine-tuning the model with the personalized training data) [Niinuma: 0021 “the machine learning system 130 may be further trained, tuned”] (fine-tuning is the same process as the claimed updating step). Niinuma fails to explicitly disclose providing features extracted from the user image data to a machine learning model; in response to providing the features extracted from the user image data to the machine learning model, obtaining a confidence value associated with an output of the machine learning model; when the confidence value associated with the output of a machine learning model satisfies a threshold, receiving additional user image data.
However, Terry discloses providing features extracted from the user image data to a machine learning model (interpreted as extract features from the image data and input those features into an ML model)[Terry: 0143 “after the training set has been collected, features are defined and extracted”](these are the features extracted for training the model), in response to providing the features extracted from the user image data to the machine learning model, obtaining a confidence value associated with an output of the machine learning model [Terry: 0138 “A confidence score”]; when the confidence value associated with the output of a machine learning model satisfies a threshold [Terry: 0148 “may be employed when the deployed model falls below a confidence threshold”], receiving additional user image data [Terry: 0174 “receive information from the network”].
Niinuma and Terry are considered analogous to the claimed invention because they are in the same field of applying machine learning to image data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Niinuma to incorporate Terry’s teachings of utilizing a confidence score and threshold. Such a combination would provide the benefit of improving the reliability and quality of model outputs.
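For illustration only, the following minimal sketch traces the combined flow mapped above (a low-confidence output triggers receipt of additional user images, 3D topology generation, and model updating); all names, values, and stub implementations are hypothetical and appear in neither reference:

```python
import random
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # hypothetical value; neither reference fixes a number

@dataclass
class ToyModel:
    confidence: float = 0.6  # stand-in for a real classifier's confidence output

    def predict_with_confidence(self, features):
        return "neutral", self.confidence

    def fine_tune(self, training_set):
        # Stand-in for updating the model with personalized training data.
        self.confidence = min(1.0, self.confidence + 0.05 * len(training_set))

def extract_features(image):
    return [sum(image) / len(image)]  # trivial placeholder feature

def receive_additional_user_images(n=3):
    return [[random.random() for _ in range(16)] for _ in range(n)]

def build_3d_topology(images):
    return {"vertices": 100 * len(images)}  # placeholder for a 3D face registration

def synthesize_training_data(topology):
    return [("synthetic_image", "label")] * 4  # placeholder personalized dataset

def process(image, model):
    label, confidence = model.predict_with_confidence(extract_features(image))
    if confidence < CONFIDENCE_THRESHOLD:          # threshold condition satisfied
        extra = receive_additional_user_images()   # receive additional user image data
        topology = build_3d_topology(extra)        # generate the 3D topology
        model.fine_tune(synthesize_training_data(topology))
    return label

print(process([0.1, 0.2, 0.3], ToyModel()))
```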
Regarding claim 30, Niinuma and Terry disclose the method of claim 1, comprising: receiving personalized training data (interpreted as the training pipeline obtaining a per-user training set) [Niinuma: 0013 “The personalized training dataset may be used to train a machine learning system”][Niinuma: 0036 “At block 340, the new images synthesized at block 330 may be added to a dataset.”][Niinuma: 0037 “At block 350, the machine learning system may be trained using the dataset generated at block 340.”] (teaches a personalized training dataset that is provided to the machine learning system for training; training using the dataset necessarily means the system receives that personalized data as input); and generating a personalized machine learning model by training a general purpose machine learning model based on the personalized training data and based on baseline training data [Niinuma: 0013 “the machine learning system may first be trained generically to be applicable to any person, and may afterwards be tuned or further trained based on the images of the specific individual to become personalized.”] (teaches a generic model that is then personalized using the user's images, which corresponds to baseline training followed by fine-tuning on personalized data to yield a personalized model).
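As an illustrative sketch of the generic-then-personalized training sequence described above (a hypothetical toy example, not taken from Niinuma):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Baseline training data: drawn from a variety of individuals.
X_base = rng.normal(size=(500, 8))
y_base = (X_base[:, 0] > 0).astype(int)

# Personalized training data: one individual, slightly shifted distribution.
X_user = rng.normal(loc=0.3, size=(40, 8))
y_user = (X_user[:, 0] > 0.3).astype(int)

model = SGDClassifier(random_state=0)
model.fit(X_base, y_base)          # generic training, applicable to any person
model.partial_fit(X_user, y_user)  # further tuned on the specific user's data
print(model.score(X_user, y_user))
```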
Claims 34 and 39 are system and computer-readable medium claims corresponding to method claim 1 without any additional limitations. Thus, claims 34 and 39 are rejected for the same reasons as claim 1 above.
Claims 21, 22, 35, and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Niinuma et al. (U.S. Patent Publication No. 2023/0029505), in view of Terry et al. (U.S. Patent Publication No. 2019/0180195), in further view of Chen et al. (U.S. Patent No. 10,796,480).
Regarding claim 21, Niinuma and Terry disclose the method according to claim 1, but fail to explicitly disclose wherein the 3D topology comprises a facial mesh based on real-world skin, wherein the received additional user image data is representative of the real-world skin.
However, Chen discloses wherein the 3D topology comprises a facial mesh based on real-world skin (interpreted as the computational model having two parts: geometry (the facial mesh) and appearance (skin/texture) taken from the real person) (Chen: Col. 10, Lines 33-35 “FIG. 21 shows an example illustration of a process of transferring the texture map from the raw scan data to the registered head model”) (Chen: Col. 16, Lines 63-65 “Generating a good appearance model (i.e. the texture map) is another challenging task. A high quality texture map plays an important role in realistic rendering.”) (Chen: Col. 26, Lines 3-6 “This image or video is then used as the input to the later modules in the system to reconstruct or digitise a 3D face model and a corresponding texture map.”) (teaches generating a registered 3D head mesh and transferring a texture map from the user capture to that mesh; the mesh corresponds to the facial geometry, and the mapped texture corresponds to the skin based on real-world skin), wherein the received additional user image data is representative of the real-world skin (interpreted as the skin/texture being derived from the user's own images) (Chen: Col. 25, Line 67 to Col. 26, Lines 1-6 “a user has to take a selfie (Section 2.1) or a short video of their head turning from one side to the other (Section 2.2 and 2.3), typically using the front facing camera of a mobile device. This image or video is then used as the input to the later modules in the system to reconstruct or digitise a 3D face model and a corresponding texture map.”) (teaches that the user's selfie/video is the source for the texture map that becomes the skin on the model).
Niinuma, Terry, and Chen are considered analogous to the claimed invention because they are in the same field of subject-specific facial dataset generation using 3D facial models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Niinuma and Terry to incorporate Chen’s teachings of transferring a user’s real skin texture map onto the user-specific 3D facial mesh. Such a combination would provide the benefit of reducing the synthetic-to-real domain gap and improving recognition accuracy by training on images rendered from realistic albedo/texture.
Regarding claim 22, Niinuma, Terry, and Chen disclose the method according to claim 21, wherein the personalized training data is labelled based on the 3D topology (interpreted as the training data generated for the user being labeled, with those labels deriving from the same computational model used to synthesize the data) [Niinuma: 0036 “the synthesized images with their labeled AU combinations and categories of intensity may be added to a dataset”][Niinuma: 0030 “Based on the 3D registration, the synthesized images 230b may be performed using the input images 210b as the base.”] (teaches synthesizing subject-specific images using a computational model (3D registration), then adding those images to a dataset with labels (AU combinations and intensities); because the labels are attached to images that were generated based on the 3D registration, the resulting labeled training data is based on the computational model).
Claim 35 is a system claim corresponding to method claim 21 without any additional limitations. Thus, claim 35 is rejected for the same reasons as claim 21 above.
Claim 36 is a system claim corresponding to method claim 22 without any additional limitations. Thus, claim 36 is rejected for the same reasons as claim 22 above.
Claims 23, 24, 37, and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Niinuma et al. (U.S. Patent Publication No. 2023/0029505), in view of Terry et al. (U.S. Patent Publication No. 2019/0180195), in further view of Rowell et al. (U.S. Patent No. 11,257,272).
Regarding claim 23, Niinuma and Terry disclose the method according to claim 1, but fail to explicitly disclose comprising: capturing a plurality of views of the 3D topology from different viewing angles; and capturing a plurality of views of the 3D topology under different lighting conditions.
However, Rowell discloses comprising: capturing a plurality of views of the 3D topology from different viewing angles (interpreted as generating many images of the model by changing camera orientation) (Rowell: 1106; Fig. 11 “Incrementally varying at least one parameter included in the camera settings file to generate a series of camera views with each camera view having a unique image plane”) (Rowell: 514; Fig. 5 “Rotation Parameters yaw, pitch, roll”) (Rowell: Col. 5, Lines 47-51 “Routines for incrementally varying camera orientation (e.g., position, object depth, field of view, etc.) and camera capture settings (e.g., zoom, focus, baseline, etc.) may be implemented by systems and methods of the present invention to create hundreds or thousands of images depicting unique perspectives of a scene in minutes or seconds.”) (teaches generating a series of camera views and controlling yaw/pitch/roll); and capturing a plurality of views of the 3D topology under different lighting conditions (interpreted as rendering many images of the model while changing the lighting; different lighting conditions corresponds to varying the light position and the number of sources) (Rowell: 745; Fig. 7 “Lighting - position, number of light sources”) (Rowell: 1106; Fig. 11 “Incrementally varying at least one parameter included in the camera settings file to generate a series of camera views with each camera view having a unique image plane”) (teaches including lighting as an adjustable capture setting and generating a series of views; varying the light parameters across the series produces a plurality of views under different lighting conditions).
Niinuma, Terry, and Rowell are considered analogous to the claimed invention because they are in the same field of subject-specific facial dataset generation using 3D facial models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Niinuma and Terry to incorporate Rowell’s teachings of varying camera orientation and lighting parameters. Such a combination would provide the benefit of dataset diversification, improving robustness to pose and illumination variation.
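Purely as illustration of the view-sweep mechanism described above (all angle and light values hypothetical, not taken from Rowell):

```python
import itertools

yaw_angles = [-30, 0, 30]                 # degrees; hypothetical sweep values
pitch_angles = [-10, 0, 10]
light_positions = [(1, 0, 1), (0, 1, 1)]  # hypothetical light placements

def render(topology, yaw, pitch, light):
    # Stand-in for a real renderer: returns a descriptor of one captured view.
    return {"topology": topology, "yaw": yaw, "pitch": pitch, "light": light}

views = [render("face_mesh", yaw, pitch, light)
         for yaw, pitch, light in itertools.product(yaw_angles, pitch_angles, light_positions)]
print(len(views))  # 3 x 3 x 2 = 18 views across angles and lighting conditions
```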
Regarding claim 24, Niinuma and Terry disclose the method according to claim 23, but fail to explicitly disclose wherein the personalized training data is labelled based on camera placement, and/or lighting features used in capturing the views of the 3D topology.
However, Rowell discloses wherein the personalized training data is labelled based on camera placement (interpreted as labels including where the camera was placed, i.e., its position and orientation) (Rowell: Col. 4, Lines 53-56 “Camera orientation and camera capture settings are also precisely defined in camera settings files to provide specific camera location and capture setting values for each synthetic image.”) (Rowell: Col. 38, Lines 3-40 “a camera setting file 500 includes camera position data 510 … The rotation parameters 514 may include Euler angles (i.e., pitch, yaw, and roll)”) (Rowell: Col. 33, Lines 4-6 “The image indexing module 108 can be software including routines for tagging and indexing synthetic image data by metadata fields.”) (teaches recording per-image camera location and orientation in the camera settings file and then tagging/indexing images by metadata fields), and/or lighting features (interpreted as the number, position, and intensity of lights) used in capturing the views of the 3D topology (interpreted as labels also including the lighting attributes used for capture) (Rowell: Col. 43, Lines 5-10 “one or more lighting setting 745 may describe the lighting conditions of a particular scene. Lighting settings 745 may define the number of light sources present in a scene, the position of each light source, and the intensity of light emitted from each light source.”) (Rowell: Col. 32, Lines 48-52 “The training database 905 may also include synthetic image data metadata describing characteristics of synthetic images and additional image data channels as well as scene metadata describing attributes of image scenes captured in synthetic images”) (teaches lighting settings used during capture and storing metadata for images/datasets; combined with the tagging/indexing module, this teaches labeling training data based on the lighting features used to capture each view).
Niinuma, Terry, and Rowell are considered analogous to the claimed invention because they are in the same field of subject-specific facial dataset generation using 3D facial models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Niinuma and Terry to incorporate Rowell’s teachings of recording and using camera placement and lighting settings as per-image metadata, thereby labeling the personalized training data based on the camera placement and/or lighting features used to capture each rendered view. Such a combination would provide the benefit of improving dataset curation and reproducibility, enabling pose- and illumination-aware training and evaluation, and allowing downstream selection or weighting by viewpoint and lighting, yielding predictable performance gains.
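A minimal illustrative sketch of per-image metadata labeling of this kind (field names hypothetical, not Rowell’s actual file format):

```python
def label_view(yaw, pitch, light_position, num_sources=1):
    # Per-image label derived from the capture parameters, analogous to
    # recording camera settings and lighting settings as metadata.
    return {
        "camera": {"yaw": yaw, "pitch": pitch},
        "lighting": {"position": light_position, "num_sources": num_sources},
    }

dataset = [
    ("view_000.png", label_view(-30, 0, (1, 0, 1))),
    ("view_001.png", label_view(0, 10, (0, 1, 1))),
]
# Downstream selection by viewpoint, e.g., all frontal (yaw == 0) views:
frontal = [name for name, meta in dataset if meta["camera"]["yaw"] == 0]
print(frontal)
```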
Claim 37 is a system claim corresponding to method claim 23 without any additional limitations. Thus, claim 37 is rejected for the same reasons as claim 23 above.
Claim 38 is a system claim corresponding to method claim 24 without any additional limitations. Thus, claim 38 is rejected for the same reasons as claim 24 above.
Claims 25 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Niinuma et al. (U.S. Patent Publication No. 2023/0029505), in view of Terry et al. (U.S. Patent Publication No. 2019/0180195), in further view of Kaliouby et al. (U.S. Patent No. 11,151,610).
Regarding claim 25, Niinuma and Terry disclose the method according to claim 1, wherein the personalized training data is based on a parameterized input (interpreted as the personalized training set being generated using inputs defined by parameters) [Niinuma: 0012 “The present disclosure relates to the generation of a personalized dataset that may be used to train a machine learning system”][Niinuma: 0015 “provide a more personalized dataset for training such that the machine learning system is better able to identify and classify the facial expression of an input image to the machine learning system because it has been trained based on images of an individual, rather than generically trained using a variety of images of a variety of individuals.”] (teaches personalized training data; generating such a dataset from defined inputs corresponds to basing the data on a parameterized input), but fail to explicitly disclose wherein the parameterized input represents an area of interest of a user.
However, Kaliouby discloses wherein the parameterized input represents an area of interest of a user (interpreted as the parameterized input encoding a user-specific region) (Kaliouby: Col. 5, Lines 57-59 “The method further includes establishing a region of interest including the face, separating pixels in the region of interest”) (teaches a region of interest, which corresponds to the claimed area of interest, that includes the face of the user; regions of interest are parameterized and directly used in the processing/training pipeline).
Niinuma, Terry, and Kaliouby are considered analogous to the claimed invention because they are in the same field of subject-specific facial dataset generation using defined regions. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Niinuma and Terry to incorporate Kaliouby’s teachings of establishing a region of interest that includes the face. Such a combination would provide the benefit of constraining data generation to the informative facial sub-regions, improving signal-to-noise, reducing background bias, lowering compute, and predictably enhancing model accuracy.
Regarding claim 26, Niinuma, Terry, and Kaliouby disclose the method of claim 25, wherein the parameterized input represents at least one of: a facial feature of the user [Niinuma: 0014 “image facial expression may be classified as, or including, a smile”] (teaches using facial expressions such as a smile, which corresponds to a facial feature; only one of the listed alternatives need be taught), facial hair of the user, hair of the user, an item of clothing worn by the user, an accessory worn by the user, glasses worn by the user, or a hat worn by the user.
Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Niinuma et al. (U.S. Patent Publication No. 2023/0029505), in view of Terry et al. (U.S. Patent Publication No. 2019/0180195), in view of Kaliouby et al. (U.S. Patent No. 11,151,610), in further view of Rowell et al. (U.S. Patent No. 11,257,272).
Regarding claim 27, Niinuma, Terry, and Kaliouby disclose the method of claim 25, but fail to explicitly disclose comprising: capturing a plurality of views of the 3D topology, wherein the 3D topology is based on the parameterized input, wherein the personalized training data is labelled based on the parameterized input.
However, Rowell discloses comprising: capturing a plurality of views of the 3D topology, wherein the 3D topology is based on the parameterized input, wherein the personalized training data is labelled based on the parameterized input (Rowell: Abstract “Realistic perspectives captured in synthetic images are defined by camera views created from camera settings files. To simulate capture performance of smartphone cameras, stereo cameras, and other actual camera devices, capture, calibration, and camera intrinsic parameters included in camera settings files are identical to parameters included in actual cameras”) (teaches perspectives, corresponding to views, that are defined by camera settings files based on intrinsic parameters, corresponding to the parameterized input).
Niinuma, Terry, Kaliouby, and Rowell are considered analogous to the claimed invention because they are in the same field of subject-specific facial dataset generation using defined regions. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Niinuma, Terry, and Kaliouby to incorporate Rowell’s teachings of a multi-view rendering pipeline. Such a combination would provide the benefit of focusing data on discriminative facial regions, improving pose and illumination robustness, and simplifying metadata labeling, yielding predictable accuracy gains.
Claims 28 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Niinuma et al. (U.S. Patent Publication No. 2023/0029505), in view of Terry et al. (U.S. Patent Publication No. 2019/0180195), in view of Kaliouby et al. (U.S. Patent No. 11,151,610), in further view of McDuff et al. (U.S. Patent No. 11,232,290).
Regarding claim 28, Niinuma, Terry, and Kaliouby disclose the method of claim 25, but fail to explicitly disclose wherein the received user image data is processed to crop and select an area of interest of the user, wherein the area of interest comprises the user's face and other surrounding user features.
However, McDuff discloses wherein the received user image data is processed to crop and select an area of interest of the user (interpreted as defining a region of interest and operating on that sub-image) (McDuff: Col. 11, Lines 53-57 “The flow 200 includes defining a region of interest (ROI) 220 in the image that includes the face. The region of interest can be located in a face based on facial landmark points such as edges of nostrils, edges of a mouth, edges of eyes, etc.”) (teaches that defining a region of interest on the user's image is selection of an area of interest; feature extraction from that region constitutes processing of the cropped/selected sub-image), wherein the area of interest comprises the user's face and other surrounding user features (McDuff: Col. 11, Lines 62-65 “The flow 200 includes computing a set of facial metrics 240 based on the one or more HoG features. The facial metrics can be used to identify the locations of facial features such as a nose, a mouth, eyes, ears, and so on.”) (teaches that the region of interest includes the face and that the processing identifies ear, eye, nose, and mouth features surrounding the face; thus, the area of interest comprises the user's face and other surrounding user features).
Niinuma, Terry, Kaliouby, and McDuff are considered analogous to the claimed invention because they all address subject-specific facial dataset generation and region-of-interest selection for machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Niinuma, Terry, and Kaliouby to incorporate McDuff’s teachings of landmark-based region-of-interest cropping. Such a combination would provide the benefit of efficiency and accuracy: tighter crops reduce computation and improve performance.
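By way of illustration, a minimal sketch of landmark-based cropping with a margin that keeps surrounding features in the crop (all coordinates and the margin value hypothetical, not McDuff’s implementation):

```python
import numpy as np

def crop_region_of_interest(image, landmarks, margin=0.25):
    # Expand the landmark bounding box by a margin so surrounding user
    # features (ears, hairline) remain inside the cropped area of interest.
    xs, ys = landmarks[:, 0], landmarks[:, 1]
    w, h = xs.max() - xs.min(), ys.max() - ys.min()
    x0 = max(int(xs.min() - margin * w), 0)
    y0 = max(int(ys.min() - margin * h), 0)
    x1 = min(int(xs.max() + margin * w), image.shape[1])
    y1 = min(int(ys.max() + margin * h), image.shape[0])
    return image[y0:y1, x0:x1]

image = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder user image
landmarks = np.array([[300, 200], [340, 200], [320, 260], [300, 300], [340, 300]])
print(crop_region_of_interest(image, landmarks).shape)
```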
Regarding claim 29, Niinuma, Terry, and Kaliouby disclose the method of claim 28, but fail to explicitly disclose wherein the area of interest comprises the user's face and at least one of: a facial feature of the user, facial hair of the user, hair of the user, an item of clothing worn by the user, an accessory worn by the user, glasses worn by the user, and a hat worn by the user.
However, McDuff discloses wherein the area of interest comprises the user's face and at least one of: a facial feature of the user (McDuff: Col. 6, Lines 52-55 “The facial landmark detection can include detecting the edges of eyebrows, the corners of a mouth, the tip of a nose, the edges of eyes, etc.”) (teaches facial features within the region of interest), facial hair of the user (McDuff: Col. 6, Lines 56-58 “The features that can be extracted within the face can include eyebrows, eyes, a nose, a mouth, and so on.”) (only one of the listed alternatives need be taught, and at least two are disclosed), hair of the user, an item of clothing worn by the user, an accessory worn by the user, glasses worn by the user, and a hat worn by the user.
Niinuma, Terry, Kaliouby, and McDuff are considered analogous to the claimed invention because they all address subject-specific facial dataset generation and region-of-interest selection for machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Niinuma, Terry, and Kaliouby to incorporate McDuff’s teachings of a region of interest containing facial features. Such a combination would provide the benefit of focusing the training data on facial attributes.
Claims 31 and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Niinuma et al. (U.S. Patent Publication No. 2023/0029505), in view of Terry et al. (U.S. Patent Publication No. 2019/0180195), in further view of Usatov et al. (U.S. Patent Publication No. 2023/0186175).
Regarding claim 31, Niinuma and Terry disclose the method of claim 30, but fail to explicitly disclose wherein the personalized machine learning model replaces a previous machine learning model based on a comparison between the generated personalized machine learning model and the previous machine learning model.
However, Usatov discloses wherein the personalized machine learning model replaces a previous machine learning model (interpreted as, after training, the new model replaces the old model)[Usatov: 0007 “The system can establish the second model as the primary model in the deployment to replace the first model in the deployment.”](teaches replacing the older model with the new model) based on a comparison between the generated personalized machine learning model and the previous machine learning model [Usatov: 0012 “determining, based on a comparison of a first model that is deployed as a primary model with a second model that is acting as a challenger model, that the second model performs better than the first model based on at least one performance metric. The method can include determining, based on a comparison of a characteristic of the first model with a characteristic of the second model, to skip a validation process for the second model. The method can include establishing the second model as the primary model in the deployment to replace the first model in the deployment”](teaches comparing models on defined metrics and replaces the prior model when the challenger outperforms it).
Niinuma, Terry, and Usatov are considered analogous to the claimed invention because they all address creating a personalized machine learning model from personalized training data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Niinuma and Terry to incorporate Usatov’s teachings of replacing machine learning models with better-performing models. Such a combination would provide the benefit of consistently using the best-performing machine learning model.
Regarding claim 32, Niinuma and Terry disclose the method of claim 31, but fail to explicitly disclose wherein the comparison comprises determining which machine learning model provides more accurate outputs.
However, Usatov discloses wherein the comparison comprises determining which machine learning model provides more accurate outputs [Usatov: 0027 “users can view accuracy value comparison of models using various accuracy metrics, such as Dual Lift Charts, Feature Impact Comparison, and/or Row level prediction difference between models.”][Usatov: 0032 “the metrics compared may include speed of performance, accuracy, and/or computation resource utilization.”][Usatov: 0035 “the second model may produce more accurate results and/or produce results faster”](teaches comparing models by accuracy and identifying the one that is more accurate).
Niinuma, Terry, and Usatov are considered analogous to the claimed invention because they all address creating a personalized machine learning model from personalized training data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Niinuma and Terry to incorporate Usatov’s teachings of comparing machine learning models based on accuracy. Such a combination would provide the benefit of utilizing the most accurate machine learning model.
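For illustration, a minimal champion/challenger sketch of the comparison-and-replacement pattern discussed for claims 31 and 32 (names and metrics hypothetical, not Usatov’s implementation):

```python
def maybe_replace(primary, challenger, eval_fn):
    # Deploy the challenger only if it outperforms the current primary
    # model on the chosen accuracy metric; otherwise keep the primary.
    return challenger if eval_fn(challenger) > eval_fn(primary) else primary

# Toy usage: each model is represented here by its held-out accuracy.
primary = {"name": "general_model", "accuracy": 0.82}
challenger = {"name": "personalized_model", "accuracy": 0.91}
deployed = maybe_replace(primary, challenger, eval_fn=lambda m: m["accuracy"])
print(deployed["name"])  # personalized_model replaces the previous model
```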
Claim 33 is rejected under 35 U.S.C. 103 as being unpatentable over Niinuma et al. (U.S. Patent Publication No. 2023/0029505), in view of Terry et al. (U.S. Patent Publication No. 2019/0180195), in view of Usatov et al. (U.S. Patent Publication No. 2023/0186175), in further view of Grove et al. (U.S. Patent No. 10,163,061).
Regarding claim 33, Niinuma, Terry, and Usatov disclose the method of claim 32, but fail to explicitly disclose comprising: conducting an inference step using a first machine learning model, and wherein the personalized machine learning model is generated based on a determination that an output of the inference step is below a threshold accuracy level.
However, Grove discloses comprising: conducting an inference step using a first machine learning model (Grove: Col. 1, Lines 51-53 “evaluate the machine learning model at least by running the machine learning model on a processor with the training example data”) (teaches running a model to evaluate it, which constitutes an inference step), and wherein the personalized machine learning model is generated based on a determination that an output of the inference step is below a threshold accuracy level (interpreted as, if the first model's output quality is below a threshold, triggering training to generate a personalized model) (Grove: Col. 19, Lines 43-51 “determine whether the quality measure is below a quality threshold, and determine whether a number of available data items comprising at least the training example data meet a specified number of inertia window data items, wherein responsive to determining that the quality measure is below the quality threshold and the number of available data items comprising at least the training example data meets the specified number of inertia window data items, the machine learning model is retrained.”) (teaches threshold quality checking and retraining when below the threshold, which corresponds to being below a threshold accuracy level).
Niinuma, Terry, Usatov, and Grove are considered analogous to the claimed invention because they all address creating a personalized machine learning model from personalized training data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Niinuma, Terry, and Usatov to incorporate Grove’s teachings of inference evaluation. Such a combination would provide the benefit of quality control: retraining is triggered only when model performance falls below an acceptable level.
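An illustrative sketch of the quality-gated retraining check mapped above (threshold and window values hypothetical; Grove does not fix specific numbers):

```python
QUALITY_THRESHOLD = 0.85   # hypothetical accuracy floor
MIN_WINDOW_ITEMS = 50      # hypothetical inertia-window data count

def evaluate_and_maybe_retrain(inference_accuracy, available_items, retrain):
    # Retrain (yielding the personalized model) only when the inference
    # output falls below the quality threshold AND enough data has accrued.
    if inference_accuracy < QUALITY_THRESHOLD and available_items >= MIN_WINDOW_ITEMS:
        return retrain()
    return None

result = evaluate_and_maybe_retrain(0.78, 120, retrain=lambda: "personalized_model")
print(result)  # personalized_model
```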
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
/AHMED TAHA/Examiner, Art Unit 2613
/XIAO M WU/Supervisory Patent Examiner, Art Unit 2613