Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to the amendment filed 08/18/2025, in which claims 1-11 and 13-14 are pending.
Response to Arguments
Applicant’s arguments, see pages 6-11, filed 08/18/2025, with respect to the rejections of the claims have been fully considered but are moot in view of the new ground of rejection made over Dadras et al. (US 2022/0207348 A1).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 9-11, and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Ellenbogen et al. (US 2017/0098172 A1) (IDS provided 01/19/2024) in view of Dadras et al. (US 2022/0207348 A1).
Regarding claim 1, Ellenbogen discloses a method for retraining (para[0007]; para[0150] & Fig. 34 teaches at 3430, a predictive model of a machine computation component can be trained by providing the sensor data as input and the result from the agent computation component as a supervisory signal; para[0184] teaches a high-confidence output from the agent computation component can be used to train one or more artificial intelligence systems forming the machine computation component. When a high-confidence output is received from the agent computation component, the analysis platform can train an artificial intelligence system using the high-confidence agent computation component output as the supervisory signal and the sensor data as the input signal. The system is continuously getting better as it is routinely retrained after the addition of these harder examples to the training set) a video monitoring device (1) (para[0228] & Fig. 17 teaches the video sensor 1705 can acquire images (for example, of a person), which are analyzed by the image analysis analytics 1710), wherein the video monitoring device (1) is provided with monitoring data (2) (para[0147] & Fig. 34 teaches sensor data received from and/or of a security system asset; an asset can include an imaging device, a video camera), wherein the monitoring data (2) comprises images of a monitored region (para[0011]; para[0148] teaches the sensor data can include, for example, an image, video, or data generated by any of the above-enumerated assets),
wherein the monitoring data (2) are processed and/or analysed on at least two processing paths (5a,b) (para[0149]-[0150] & Fig. 34 teaches at 3420, requesting processing by, and receiving a result and a confidence measure of the result from, an agent computation component and a machine computation component; at 3430, a predictive model of a machine computation component can be trained by providing the sensor data as input and the result from the agent computation component), wherein the processing and/or analysis of the monitoring data (2) on the processing paths (5a,b) each deliver a path result and an associated reliability (para[0162] teaches the machine computation component can also determine a confidence measure of its output; the agent computation component can also determine a confidence measure of its output; para[0184] teaches when a high-confidence output is received from the agent computation component, the analysis platform can train an artificial intelligence system using the high-confidence agent computation component output as the supervisory signal and the sensor data as the input signal; thus, the analysis platform can continually improve in performance and require fewer agent computation component queries to perform the same amount of work), wherein at least one of the processing paths (5a,b) forms an AI processing path (para[0141] teaches the agent can be fed images that can be used to train the artificial intelligence components of the machine computation component; para[0147], [0150] & Fig. 34 teaches a method 3400 of training a machine computation component on the result provided from an agent computation component; because the current subject matter enables run-time truth determination by querying human agents via the agent computation component, the current subject matter can train the artificial intelligence components (e.g., as implemented by the machine computation component) to improve their performance on real-world data), wherein the AI processing path is based on a neural network and is designed for object detection and/or object classification (para[0150] & Fig. 34 teaches the machine computation component can include an artificial intelligence (e.g., machine learning) system that develops and utilizes a predictive model; the machine computation component can include any number of algorithms; in some implementations, the machine computation component can include a deep neural network, a convolutional neural network (CNN), a Faster Region-based CNN (R-CNN), and the like; para[0176], [0159], [0162] teaches, e.g., pattern match, classification, detection, and the like; para[0180]-[0181] teaches the object classifier 1240 looks at the feature maps and each region proposed by the RPN 1230 and classifies each region as one of the objects of interest or not, and trains the object classifier 1240),
Ellenbogen does not explicitly disclose each deliver a path result and an associated reliability to a difference determination model, wherein a difference between the reliability of the path result of the AI processing path and the reliability of the associated path results of the further processing paths (5a,b) and/or a difference between the path result of the AI processing path and the path results of the further processing paths (5a,b) is determined by the difference determination model, wherein, if a threshold difference is exceeded by the determined difference, then the associated path result of the AI processing path is set as a training object for retraining the neural network. However, Dadras discloses each deliver a path result and an associated reliability to a difference determination model, wherein a difference between the reliability of the path result of the AI processing path and the reliability of the associated path results of the further processing paths (5a,b) and/or a difference between the path result of the AI processing path and the path results of the further processing paths (5a,b) is determined (para[0007] teaches deep neural networks to assist in classifying objects; para[0058] & Fig. 4 teaches the data annotator 415 compares the friction coefficient labels generated by the friction estimator 410 and the determined friction coefficients from the neural network 405 to determine whether a difference exceeds a label threshold. For example, the data annotator 415 compares the determined friction coefficient corresponding to a generated image to a corresponding friction coefficient label generated by the friction estimator 410. The label threshold is a metric, e.g., an empirically determined metric determined during training of the neural network 405, that represents an allowable difference between the friction coefficient labels generated by the friction estimator 410 and the determined friction coefficients from the neural network 405. The label threshold can be determined by estimating a plurality of friction coefficients using a friction estimator 410 based on similar sensor 115 data and determining a standard deviation for the friction coefficient estimates), wherein, if a threshold difference is exceeded by the determined difference, then the associated path result of the AI processing path is set as a training object for retraining the neural network (para[0059]-[0063] & Figs. 4-5 teaches if the difference exceeds the label threshold, the data annotator 415 modifies the determined friction coefficients corresponding to the image to be equal to the friction coefficient label such that the image is labeled with the friction coefficient label. Thus, the image corresponds to the friction coefficient label generated by the friction estimator 410 and not the determined friction coefficient generated by the neural network 405. The data annotator 415 can determine which image corresponds to the friction coefficient label using timestamps. For example, the data annotator 415 can match an image to a friction coefficient label when a time stamp of the image is within a predetermined time range of a time stamp of the friction coefficient label. The data annotator 415 provides the image and the friction coefficient label to the neural network 405. After receiving the image and the friction coefficient label, the neural network 405 can enter a training phase in which the neural network 405 is retrained with the image and the friction coefficient label).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Ellenbogen, in which sensor data are received and classified into one of several classes by a machine computation component including a predictive model trained on data labeled by an agent computation component, where the agent computation component includes a platform to query an agent, with the method of Dadras, which determines whether a difference between a friction coefficient label and a determined friction coefficient corresponding to an image representing an area exceeds a label threshold, in order to provide a system in which deep neural networks (DNNs) can be used to perform many image understanding tasks, including classification, segmentation, and captioning.
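For illustration only, the difference-determination and retraining-selection logic that the rejection attributes to Dadras can be sketched as follows. This is a minimal sketch assuming scalar path results; all names (PathResult, label_threshold, collect_training_objects) are hypothetical and are not drawn from either reference or the claims.

```python
from dataclasses import dataclass

@dataclass
class PathResult:
    """Output of one processing path: a result value and its reliability (confidence)."""
    value: float
    reliability: float

def collect_training_objects(ai_results, reference_results, label_threshold):
    """Compare the AI path's results against a reference path's results
    (cf. Dadras, para [0058]). Where the difference exceeds the threshold,
    the AI result is relabeled with the reference value and collected as a
    training object for retraining (cf. Dadras, para [0059]-[0063])."""
    training_objects = []
    for ai, ref in zip(ai_results, reference_results):
        if abs(ai.value - ref.value) > label_threshold:
            # Relabel with the reference path's value before retraining.
            training_objects.append(PathResult(ref.value, ref.reliability))
    return training_objects
```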
Regarding claim 2, Ellenbogen discloses the method according to claim 1, wherein the video monitoring device (1) is designed and/or provided for routine use in an application environment, wherein processing and/or analysing on the processing paths (5a,b), determining the difference, and/or setting as a training object occurs during routine use (para[0057] teaches the platform can determine whether a person who is in an area in which they should not be is behaving in a suspicious manner; the platform can use its artificial intelligence component to detect the person and then use the human agent to determine whether that person is behaving suspiciously, and then progressively train the artificial intelligence to begin to recognize suspicious behavior in that environment on its own; para[0122] teaches the system can provide routines for training agents and evaluating agent accuracy).
Regarding claim 3, Ellenbogen discloses the method according to claim 1, wherein an overall reliability is determined based on the reliabilities of the further processing paths (5a,b), wherein the difference of the reliability of the AI processing path is determined based on the overall reliability (para[0149] teaches the agent computation component can query multiple agents and create a composite output and a composite confidence; the confidence measure can characterize a likelihood that the output of the agent computation component is correct).
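The composite confidence of para [0149] admits a simple illustration; the plain averaging below is an assumption made for the sketch, since Ellenbogen does not specify the aggregation function:

```python
def overall_reliability(path_reliabilities):
    """Aggregate the reliabilities of the further processing paths into one
    overall reliability (cf. Ellenbogen, para [0149], composite confidence).
    A plain mean is assumed here purely for illustration."""
    return sum(path_reliabilities) / len(path_reliabilities)

# Example: the AI path's reliability is then compared against the aggregate.
difference = overall_reliability([0.9, 0.8, 0.95]) - 0.6  # 0.2833...
```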
Regarding claim 4, Ellenbogen discloses the method according to claim 1, wherein the path result of the AI processing path is set as a training object if the associated reliability is below a minimum reliability (Para[0184] teaches when the confidence measure returned by the machine computation component is low, the image can be sent to an agent who can correct any mistakes in bounding boxes or labeling. Images that have incorrect bounding boxes and/or misclassified labels can be fixed and added to the training set).
Regarding claim 5, Ellenbogen discloses the method according to claim 1, wherein the training object and/or associated image is added to a training data set (9), wherein the neural network of the AI processing path is retrained based on the training data set (9) (para[0053] teaches the analysis platform can include a machine computation component, described more fully below, that can include predictive models built using a machine learning algorithm, for example, a deep learning neural network; para[0176], [0184] teaches when the confidence measure returned by the machine computation component is low, the image can be sent to an agent who can correct any mistakes in bounding boxes or labeling; images that have incorrect bounding boxes and/or misclassified labels can be fixed and added to the training set; the system is continuously getting better as it is routinely retrained after the addition of these harder examples to the training set).
Regarding claim 9, Ellenbogen discloses the method according to claim 5, wherein the addition of the training object and/or the training data (11) to the training data set (9) is controlled, verified and/or released by a person (para[0147] & Fig. 34 teaches a process flow diagram illustrating a method 3400 of training a machine computation component on the result provided from an agent computation component; because the current subject matter enables run-time truth determination by querying human agents via the agent computation component, the current subject matter can train the artificial intelligence components (e.g., as implemented by the machine computation component) to improve their performance on real-world data).
Regarding claim 10, Ellenbogen discloses the method according to claim 1, wherein the object detection and/or object classification is based on an image evaluation of the monitoring data (2) (para[0007], [0014], [0051], [0054]; para[0176] teaches the object detector 1210 includes a CNN for performing image processing including creating a bounding box around objects in an image and detecting or classifying the objects in the image; the input to the object detector is a digital image and the output is an array of bounding boxes and corresponding class labels; an example input image and an example output is illustrated in Fig. 13; the class labels are: person, car, helmet, and motorcycle).
Regarding claim 11, Ellenbogen discloses the method according to claim 1, wherein the monitoring data (2) comprises sensor data of at least one sensor (4), wherein the sensor (4) forms a radar, infrared, lidar, UV, speed and/or distance sensor (para[0010] teaches the sensor data can be from a radar imaging device or a proximity sensor).
Regarding claim 13, Ellenbogen discloses a non-transitory, computer-readable storage medium containing instructions that when executed by a computer (para[0242] teaches a machine-readable medium that receives machine instructions as a machine-readable signal; the term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor; the machine-readable medium can store such machine instructions non-transitorily), cause the computer to retrain a video monitoring device (1) (para[0184] teaches a high-confidence output from the agent computation component can be used to train one or more artificial intelligence systems forming the machine computation component; when a high-confidence output is received from the agent computation component, the analysis platform can train an artificial intelligence system using the high-confidence agent computation component output as the supervisory signal and the sensor data as the input signal; the system can continuously get better as it is routinely retrained after the addition of these harder examples to the training set), wherein the video monitoring device (1) is provided with monitoring data (2) (para[0147] & Fig. 34 teaches sensor data received from and/or of a security system asset; an asset can include an imaging device, a video camera), wherein the monitoring data (2) comprises images of a monitored region (para[0011]; para[0148] teaches the sensor data can include, for example, an image, video, or data generated by any of the above-enumerated assets),
by processing and/or analysing the monitoring data (2) on at least two processing paths (5a,b) (para[0149]-[0150] & Fig. 34 teaches at 3420, requesting processing by, and receiving a result and a confidence measure of the result from, an agent computation component and a machine computation component; at 3430, a predictive model of a machine computation component can be trained by providing the sensor data as input and the result from the agent computation component), wherein the processing and/or analysis of the monitoring data (2) on the processing paths (5a,b) each deliver a path result and an associated reliability (Fig. 4 & para[0162] teaches the machine computation component can also determine a confidence measure of its output; the agent computation component can also determine a confidence measure of its output; para[0184] teaches when a high-confidence output is received from the agent computation component, the analysis platform can train an artificial intelligence system using the high-confidence agent computation component output as the supervisory signal and the sensor data as the input signal; thus, the analysis platform can continually improve in performance and require fewer agent computation component queries to perform the same amount of work; para[0172] teaches at 440, each task can be executed using their respective solution state machine objects such that the processing includes processing by a machine computation component and by an agent computation component; after execution, each task has a result (for example, presence of a person or intrusion is detected)), wherein at least one of the processing paths (5a,b) forms an AI processing path (para[0141] teaches the agent can be fed images that can be used to train the artificial intelligence components of the machine computation component; para[0147], [0150] & Fig. 34 teaches a method 3400 of training a machine computation component on the result provided from an agent computation component; because the current subject matter enables run-time truth determination by querying human agents via the agent computation component, the current subject matter can train the artificial intelligence components (e.g., as implemented by the machine computation component) to improve their performance on real-world data), wherein the AI processing path is based on a neural network and is designed for object detection and/or object classification (para[0150] & Fig. 34 teaches the machine computation component can include an artificial intelligence (e.g., machine learning) system that develops and utilizes a predictive model; the machine computation component can include any number of algorithms; in some implementations, the machine computation component can include a deep neural network, a convolutional neural network (CNN), a Faster Region-based CNN (R-CNN), and the like; para[0176], [0159], [0162] teaches, e.g., pattern match, classification, detection, and the like; para[0180]-[0181] teaches the object classifier 1240 looks at the feature maps and each region proposed by the RPN 1230 and classifies each region as one of the objects of interest or not, and trains the object classifier 1240),
Ellenbogen does not explicitly disclose at least two processing paths (5a,b) in parallel to one another, and by determining a difference between the reliability of the path result of the AI processing path and the reliability of the associated path results of the further processing paths (5a,b) and/or a difference between the path result of the AI processing path and the path results of the further processing paths (5a,b), wherein, if a threshold difference is exceeded by the determined difference, then the associated path result of the AI processing path is set as a training object for retraining the neural network. However, Dadras discloses at least two processing paths (5a,b) in parallel to one another (Figs. 3-4 & para[0056] teaches the neural network 405 determines a friction coefficient of the surface; para[0057] teaches the friction estimator 410 receives sensor 115 data; para[0058] teaches the data annotator 415 receives the friction coefficient labels generated by the friction estimator 410 and the images and corresponding determined friction coefficients from the neural network 405), and by determining a difference between the reliability of the path result of the AI processing path and the reliability of the associated path results of the further processing paths (5a,b) and/or a difference between the path result of the AI processing path and the path results of the further processing paths (5a,b) (para[0058] & Fig. 4 teaches the data annotator 415 compares the friction coefficient labels generated by the friction estimator 410 and the determined friction coefficients from the neural network 405 to determine whether a difference exceeds a label threshold; for example, the data annotator 415 compares the determined friction coefficient corresponding to a generated image to a corresponding friction coefficient label generated by the friction estimator 410; the label threshold is a metric, e.g., an empirically determined metric determined during training of the neural network 405, that represents an allowable difference between the friction coefficient labels generated by the friction estimator 410 and the determined friction coefficients from the neural network 405; the label threshold can be determined by estimating a plurality of friction coefficients using a friction estimator 410 based on similar sensor 115 data and determining a standard deviation for the friction coefficient estimates), wherein, if a threshold difference is exceeded by the determined difference, then the associated path result of the AI processing path is set as a training object for retraining the neural network (para[0059]-[0063] & Figs. 4-5 teaches if the difference exceeds the label threshold, the data annotator 415 modifies the determined friction coefficients corresponding to the image to be equal to the friction coefficient label such that the image is labeled with the friction coefficient label; thus, the image corresponds to the friction coefficient label generated by the friction estimator 410 and not the determined friction coefficient generated by the neural network 405; the data annotator 415 can determine which image corresponds to the friction coefficient label using timestamps, for example matching an image to a friction coefficient label when a time stamp of the image is within a predetermined time range of a time stamp of the friction coefficient label; the data annotator 415 provides the image and the friction coefficient label to the neural network 405; after receiving the image and the friction coefficient label, the neural network 405 can enter a training phase in which the neural network 405 is retrained with the image and the friction coefficient label). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Ellenbogen, in which sensor data are received and classified into one of several classes by a machine computation component including a predictive model trained on data labeled by an agent computation component, where the agent computation component includes a platform to query an agent, with the method of Dadras, which determines whether a difference between a friction coefficient label and a determined friction coefficient corresponding to an image representing an area exceeds a label threshold, in order to provide a system in which deep neural networks (DNNs) can be used to perform many image understanding tasks, including classification, segmentation, and captioning.
Regarding claim 14, Ellenbogen discloses a video monitoring device (1), wherein the video monitoring device (1) is provided with monitoring data (2) (para[0228] & Fig. 17 teaches the video sensor 1705 can acquire images (for example, of a person), which are analyzed by the image analysis analytics 1710), wherein the monitoring data (2) comprise images of a monitored region (para[0147] & Fig. 34 teaches sensor data received from and/or of a security system asset; an asset can include an imaging device, a video camera), with an analysis module, wherein the analysis module comprises and forms at least two processing paths (5a,b) (para[0149]-[0150] & Fig. 34 teaches at 3420, requesting processing by, and receiving a result and a confidence measure of the result from, an agent computation component and a machine computation component; at 3430, a predictive model of a machine computation component can be trained by providing the sensor data as input and the result from the agent computation component) to process and/or analyse the monitoring data (2) on at least two processing paths (5a,b), wherein the processing and/or analysis of the monitoring data (2) on the processing paths (5a,b) each deliver a path result and an associated reliability measure (Fig. 4 & para[0172] teaches at 440, each task can be executed using their respective solution state machine objects such that the processing includes processing by a machine computation component and by an agent computation component; after execution, each task has a result (for example, presence of a person or intrusion is detected); para[0149] & Fig. 34 teaches at 3420, requesting processing by, and receiving a result and a confidence measure of the result from, an agent computation component; para[0162] teaches the machine computation component can also determine a confidence measure of its output; the agent computation component can also determine a confidence measure of its output; para[0184] teaches when a high-confidence output is received from the agent computation component, the analysis platform can train an artificial intelligence system using the high-confidence agent computation component output as the supervisory signal and the sensor data as the input signal; thus, the analysis platform can continually improve in performance and require fewer agent computation component queries to perform the same amount of work), wherein at least one of the processing paths (5a,b) forms an AI processing path (para[0141] teaches the agent can be fed images that can be used to train the artificial intelligence components of the machine computation component; para[0147], [0150] & Fig. 34 teaches a method 3400 of training a machine computation component on the result provided from an agent computation component; because the current subject matter enables run-time truth determination by querying human agents via the agent computation component, the current subject matter can train the artificial intelligence components (e.g., as implemented by the machine computation component) to improve their performance on real-world data), wherein the AI processing path is based on a neural network and is designed for object detection and/or object classification (para[0150] & Fig. 34 teaches the machine computation component can include an artificial intelligence (e.g., machine learning) system that develops and utilizes a predictive model; the machine computation component can include any number of algorithms; in some implementations, the machine computation component can include a deep neural network, a convolutional neural network (CNN), a Faster Region-based CNN (R-CNN), and the like; para[0176], [0159], [0162] teaches, e.g., pattern match, classification, detection, and the like; para[0180]-[0181] teaches the object classifier 1240 looks at the feature maps and each region proposed by the RPN 1230 and classifies each region as one of the objects of interest or not, and trains the object classifier 1240).
Ellenbogen does not explicitly disclose at least two processing paths (5a,b) in parallel and simultaneously, wherein the analysis module is configured to determine a difference between the reliability of the path result of the AI processing path and the reliability of the associated path results of the further processing paths (5a,b) and/or a difference between the path result of the AI processing path and the path results of the further processing paths (5a,b), wherein the analysis module is configured, if a threshold difference is exceeded by the determined difference, to set the associated path result of the AI processing path as the training object for retraining the neural network. However, Dadras discloses at least two processing paths (5a,b) in parallel and simultaneously (Figs. 3-4 & para[0056] teaches the neural network 405 determines a friction coefficient of the surface; para[0057] teaches the friction estimator 410 receives sensor 115 data; para[0058] teaches the data annotator 415 receives the friction coefficient labels generated by the friction estimator 410 and the images and corresponding determined friction coefficients from the neural network 405), wherein the analysis module is configured to determine a difference between the reliability of the path result of the AI processing path and the reliability of the associated path results of the further processing paths (5a,b) and/or a difference between the path result of the AI processing path and the path results of the further processing paths (5a,b) (para[0058] & Fig. 4 teaches the data annotator 415 compares the friction coefficient labels generated by the friction estimator 410 and the determined friction coefficients from the neural network 405 to determine whether a difference exceeds a label threshold; for example, the data annotator 415 compares the determined friction coefficient corresponding to a generated image to a corresponding friction coefficient label generated by the friction estimator 410; the label threshold is a metric, e.g., an empirically determined metric determined during training of the neural network 405, that represents an allowable difference between the friction coefficient labels generated by the friction estimator 410 and the determined friction coefficients from the neural network 405; the label threshold can be determined by estimating a plurality of friction coefficients using a friction estimator 410 based on similar sensor 115 data and determining a standard deviation for the friction coefficient estimates), and wherein the analysis module is configured, if a threshold difference is exceeded by the determined difference, to set the associated path result of the AI processing path as the training object for retraining the neural network (para[0059]-[0063] & Figs. 4-5 teaches if the difference exceeds the label threshold, the data annotator 415 modifies the determined friction coefficients corresponding to the image to be equal to the friction coefficient label such that the image is labeled with the friction coefficient label; thus, the image corresponds to the friction coefficient label generated by the friction estimator 410 and not the determined friction coefficient generated by the neural network 405; the data annotator 415 can determine which image corresponds to the friction coefficient label using timestamps, for example matching an image to a friction coefficient label when a time stamp of the image is within a predetermined time range of a time stamp of the friction coefficient label; the data annotator 415 provides the image and the friction coefficient label to the neural network 405; after receiving the image and the friction coefficient label, the neural network 405 can enter a training phase in which the neural network 405 is retrained with the image and the friction coefficient label). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Ellenbogen, in which sensor data are received and classified into one of several classes by a machine computation component including a predictive model trained on data labeled by an agent computation component, where the agent computation component includes a platform to query an agent, with the method of Dadras, which determines whether a difference between a friction coefficient label and a determined friction coefficient corresponding to an image representing an area exceeds a label threshold, in order to provide a system in which deep neural networks (DNNs) can be used to perform many image understanding tasks, including classification, segmentation, and captioning.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Ellenbogen et al. (US 2017/0098172 A1) (IDS provided 01/19/2024) in view of Dadras et al. (US 2022/0207348 A1) and Packwood et al. (US 2023/0177391 A1).
Regarding claim 6, Ellenbogen in view of Dadras discloses the method according to claim 5. Ellenbogen in view of Dadras does not explicitly disclose wherein the training object is added to the training data set (9) as a positive or negative example. However, Packwood discloses wherein the training object is added to the training data set (9) as a positive or negative example (para[0117] teaches where the model is doing a two-way classification (spill/no-spill) and the accumulation of annotated false positive and false negative images is generating additional images of the same two classes (new spill images and new no-spill images), then transfer learning, i.e. the second option, is more applicable as an approach and gives a better trade-off of accuracy against training time and resource). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Ellenbogen in view of Dadras, in which sensor data are received and classified into one of several classes by a machine computation component including a predictive model trained on data labeled by an agent computation component, where the agent computation component includes a platform to query an agent, with the method of Packwood, which automatically collects and annotates image data of new products based on event triggers, in order to provide a system that creates a data management pipeline to apply automated logic to the image capture and annotation process, brings the resulting annotated image data back to a central location where it can be processed through quality assurance (QA) workflows to check its suitability for use in model re-training, and then feeds that data into model training and validation workflows.
Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Ellenbogen et al. (US 2017/0098172 A1) (IDS provided 01/19/2024) in view of Dadras et al. (US 2022/0207348 A1) and Turkelson et al. (US 2021/0004589 A1).
Regarding claim 7, Ellenbogen in view of Dadras discloses the method according to claim 5. Ellenbogen in view of Dadras does not explicitly disclose wherein training data (11) is searched for in training databases (10) based on the training object, wherein the training data (11) comprises at least one image comprising an object similar to the training object, wherein the training data (11) found is added to the training data set (9). However, Turkelson discloses wherein training data (11) is searched for in training databases (10) based on the training object (para[0239] teaches to determine the objects, image-capture task subsystem 112C may access training data database 138C), wherein the training data (11) comprises at least one image comprising an object similar to the training object, wherein the training data (11) found is added to the training data set (9) (para[0153] teaches a similarity measure between the first image and the newly added image may be computed and, if the similarity satisfies a threshold similarity, the first image may be added to the training data set; similarly, this process may iteratively scan previously obtained images to determine whether any are “similar” to the newly added image). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Ellenbogen in view of Dadras, in which sensor data are received and classified by a machine computation component including a predictive model trained on data labeled by an agent computation component and image data are collected and annotated based on event triggers, with the method of Turkelson, in which an object recognition model is trained using a training data set of images depicting objects, in order to provide a system which facilitates the performance of a visual search to identify the object depicted by the captured image.
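The similarity-threshold search described in Turkelson's para [0153] can be sketched schematically; the cosine-similarity measure and the embedding representation below are assumptions made for illustration, not Turkelson's disclosed implementation:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def find_similar_training_images(query_embedding, database, threshold=0.9):
    """Scan a training database for images whose embeddings are similar to
    the training object's embedding (cf. Turkelson, para [0153]); images
    meeting the threshold similarity are added to the training data set."""
    return [image for image, embedding in database
            if cosine_similarity(query_embedding, embedding) >= threshold]
```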
Regarding claim 8, Turkelson further discloses the method according to claim 5, wherein artificial training data (11) is created based on the training object, wherein the artificial training data (11) comprises at least one image comprising an object similar to the training object, wherein the training data (11) found is added to the training data set (9) (para[0153] teaches a similarity measure between the first image and the newly added image may be computed and, if the similarity satisfies a threshold similarity, the first image may be added to the training data set; similarly, this process may iteratively scan previously obtained images to determine whether any are “similar” to the newly added image). Motivation to combine as indicated in claim 7.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROWINA J CATTUNGAL whose telephone number is (571)270-5922. The examiner can normally be reached Monday-Thursday 7:30am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached at (571) 272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROWINA J CATTUNGAL/Primary Examiner, Art Unit 2425