DETAILED ACTION
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination
2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant’s submission filed 14 January 2026 [hereinafter Response] has been entered. Accordingly:
Claims 2, 3, 11-13, 17, and 20 have been cancelled.
New claim 23 is presented for examination.
Claims 1, 4-10, 14-16, 18, 19, and 21-23 are pending.
Claims 1, 4-10, 14-16, 18, 19, and 21-23 are rejected.
Claim Rejections - 35 U.S.C. § 101
3. 35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
4. Claims 1, 4-10, 14-16, 18, 19, and 21-23 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 recites an “Information Handling System,” which is a machine, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101). However, under Step 2A Prong One, the claim recites the limitations of “detect an observation overlap between two or more devices by determining a focal length and a pose of each of the two or more devices,” “identify a consensus between Artificial Intelligence (AI) or Machine Intelligence (ML) model inferences made based upon data received by the two or more devices by applying a weight to an AI/ML model inference based, at least in part, upon a confidence of the AI/ML model inference,” “modify the confidence based upon drift detected during operation of an AI/ML model used to make the AI/ML model inference,” and “in response to the identification, tag at least a subset of the data with a ground truth label.” These activities of “detect,” “identify,” “applying a weight,” “modify the confidence,” and “tag” are a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. The claim recites more details or specifics to the abstract idea of “modify the confidence,” where “in response to a determination that the drift is greater than a threshold value, reduce the confidence proportionally to the drift,” which is merely more specific to the abstract idea. Thus, claim 1 recites an abstract idea.
Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include a “processor,” “a memory coupled to the processor, the memory having program instructions stored thereon,” and “two or more devices comprises an optical camera,” which are generic computer components used to implement the abstract idea, and accordingly, do not serve to integrate the abstract idea into a practical application. (MPEP § 2106.05(f)). Also, the additional element of “two or more devices” is generally linking the abstract idea to a field of use (that is, specifying an environment for the abstract idea), that does not integrate the abstract idea into a practical application. (MPEP § 2106.05(h)). Therefore, claim 1 is directed to the abstract idea.
Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The additional elements recited in the claim beyond the identified judicial exception include a “processor,” “a memory coupled to the processor, the memory having program instructions stored thereon,” and “two or more devices comprises an optical camera,” which are generic computer components used to implement the abstract idea, and accordingly, do not amount to significantly more than the abstract idea. (MPEP § 2106.05(f)). Also, the additional element of “two or more devices” is generally linking the abstract idea to a field of use (that is, specifying an environment for the abstract idea), that does not amount to significantly more than the abstract idea. (MPEP § 2106.05(h)). Therefore, claim 1 is subject-matter ineligible.
Claims 4 and 5 depend directly or indirectly from claim 1. The claims recite more details or specifics to the additional elements of “two or more devices,” (claim 4: “wherein at least a subset of the two or more devices comprises instances of the identical hardware”; claim 5: “wherein at least a subset of the two or more devices comprises different hardware”), and accordingly, are merely more specific to the additional element. The abstract idea of these claims is not integrated into a practical application, (see MPEP § 2106.05(g)), nor does it amount to significantly more than the abstract idea, (MPEP § 2106.05(d)), because the claims recite no more than the abstract idea. Therefore, claims 4 and 5 are subject-matter ineligible.
Claims 6 and 7 depend directly or indirectly from claim 1. The claims recite more details or specifics to the abstract idea of “identify a consensus,” (claim 6: “wherein at least a subset of the AI/ML model inferences is made by distinct instances of the identical AI/ML model”; claim 7: “wherein at least a subset of the AI/ML model inferences is made by different types of AI/ML models”), and accordingly, are merely more specific to the abstract idea. Further, the claims recite additional elements of the “distinct instances of the identical AI/ML model,” and “different types of AI/ML models,” which are recited at a high level of generality, and accordingly, are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not integrate the abstract idea into a practical application, nor amount to significantly more than the abstract idea. Therefore, claims 6 and 7 are subject-matter ineligible.
Claims 8-10 depend directly or indirectly from claim 1. The claims provide more details or specifics to the abstract idea of “identify a consensus,” (claim 8: “wherein each of the AI/ML model inferences comprises detection of at least one of: an object or an image”; claim 9: “wherein to identify the consensus, . . . apply a weight to an AI/ML model inference based, at least in part, upon a hardware characteristic of a sensor employed to capture data used to make the AI/ML model inference”; claim 10: “wherein to identify the consensus . . . further cause the IHS to apply a weight to an AI/ML model inference based, at least in part, upon an observational quality of an optical camera employed to capture data used to make the AI/ML model inference”), which merely provide more details to the abstract idea. Claims 9 and 10 also recite the additional element of “an optical camera employed to capture data,” which is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), and does not serve to integrate the abstract idea into a practical application, nor amount to significantly more than the abstract idea. Therefore, claims 8-10 are subject-matter ineligible.
Claim 14 depends directly or indirectly from claim 1. The claim recites the limitations of “select the subset of the data using the tag,” in which “select” is a mental process, (MPEP § 2106.04(a)(2) sub III), and is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). The claim also recites the additional element of an “AI model,” which is recited at a high level of generality, and accordingly, is a generic computer component, (MPEP § 2106.05(f)), that does not serve to integrate the abstract idea into a practical application, nor amounts to significantly more than an abstract idea. Also, the activity of “re-train the AI/ML model with the subset of the data,” is the use of the generic computer component (AI model) to implement the abstract idea, (MPEP § 2106.05(f)) that does not serve to integrate the abstract idea into a practical application, nor amounts to significantly more than the abstract idea. Therefore, claim 14 is subject-matter ineligible.
Claim 15 recites a “hardware memory device,” which is a product, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101).
However, under Step 2A Prong One, the claim recites the limitations of “detect an observation overlap between two or more devices by determining a focal length and a pose of each of the two or more devices,” “in response to the observation overlap . . . , identify a consensus between Artificial Intelligence (AI) or Machine Intelligence (ML) model inferences made based upon data received by the two or more devices by applying a weight to an AI/ML model inference based, at least in part, upon a confidence of the AI/ML model inference,” “modify the confidence based upon drift detected during operation of an AI/ML model used to make the AI/ML model inference,” and “in response to the consensus, characterize the data as reference data.” These activities of “detect,” “identify,” “applying a weight,” “modify the confidence,” and “characterize” are a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. The claim recites more details or specifics to the abstract idea of “modify the confidence,” where “in response to a determination that the drift is greater than a threshold value, reduce the confidence proportionally to the drift,” which is merely more specific to the abstract idea. Thus, claim 15 recites an abstract idea.
Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include “a hardware memory device,” an “Information Handling System (IHS),” and a “plurality of devices,” which are generic computer components used to implement the abstract idea, and accordingly, do not serve to integrate the abstract idea into a practical application. (MPEP § 2106.05(f)). Also, the additional element of a “plurality of devices comprises an optical camera” is generally linking the abstract idea to a field of use (that is, specifying an environment for the abstract idea), that does not integrate the abstract idea into a practical application. (MPEP § 2106.05(h)). The claim also recites the additional element of an “AI/ML model,” which is recited at a high level of generality, and accordingly, is a generic computer component, (MPEP § 2106.05(f)), that does not serve to integrate the abstract idea into a practical application. Also, the activity of “re-train an AI/ML model with the subset of the data,” is the use of the generic computer component (AI/ML model) to implement the abstract idea, (MPEP § 2106.05(f)) that does not serve to integrate the abstract idea into a practical application. Therefore, claim 15 is directed to the abstract idea.
Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The additional elements recited in the claim beyond the identified judicial exception include “a hardware memory device,” an “Information Handling System (IHS),” and a “plurality of devices comprises an optical camera,” which are generic computer components used to implement the abstract idea, and accordingly, do not amount to significantly more than the abstract idea. (MPEP § 2106.05(f)). Also, the additional element of a “plurality of devices” is generally linking the abstract idea to a field of use (that is, specifying an environment for the abstract idea), that does not amount to significantly more than the abstract idea. (MPEP § 2106.05(h)). The claim also recites the additional element of an “AI/ML model,” which is recited at a high level of generality, and accordingly, is a generic computer component, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea. Also, the activity of “re-train an AI/ML model with the subset of the data,” is the use of the generic computer component (AI/ML model) to implement the abstract idea, (MPEP § 2106.05(f)) that does not amount to significantly more than the abstract idea. Therefore, claim 15 is subject-matter ineligible.
Claim 16 depends from claim 15. The claim provides more details or specifics to the additional element of the “plurality of devices,” where “at least a subset of the plurality of devices comprises different hardware,” and accordingly, is merely more specific to the additional element. The claim also recites “wherein at least a subset of the AI/ML model inferences is made by different types of AI/ML models.” The “different types of AI/ML models” are recited at such a high level of generality that these are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not integrate the abstract idea into a practical application, nor amount to significantly more than the abstract idea. Therefore, claim 16 is subject-matter ineligible.
Claim 18 recites a “method,” which is a process, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101). However, under Step 2A Prong One, the claim recites the limitations of “detecting an observation overlap between two or more devices by determining a focal length and a pose of each of the two or more devices,” “identifying a consensus between Artificial Intelligence (AI) or Machine Intelligence (ML) model inferences made based upon data received by the two or more devices by applying a weight to an AI/ML model inference based, at least in part, upon a confidence of the AI/ML model inference,” and “modifying the confidence based upon drift detected during operation of an AI/ML model used to make the AI/ML model inference.” These activities of “detecting,” “identifying,” “applying a weight,” and “modifying” are a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. The claim recites more details or specifics to the abstract idea of “modifying the confidence,” where “in response to a determination that the drift is greater than a threshold value, reducing the confidence proportionally to the drift,” which is merely more specific to the abstract idea. Thus, claim 18 recites an abstract idea.
Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include a “plurality of devices,” which is a generic computer component used to implement the abstract idea, and accordingly, does not serve to integrate the abstract idea into a practical application. (MPEP § 2106.05(f)). Also, the additional element of a “plurality of devices comprises an optical camera” is generally linking the abstract idea to a field of use (that is, specifying an environment for the abstract idea), that does not integrate the abstract idea into a practical application. (MPEP § 2106.05(h)). The claim also recites the additional element of an “AI/ML model,” which is recited at a high level of generality, and accordingly, is a generic computer component, (MPEP § 2106.05(f)), that does not serve to integrate the abstract idea into a practical application. Also, the activity of “re-training an AI/ML model using the data to improve inference confidence scoring and mitigate drift,” is the use of the generic computer component (AI/ML model) to implement the abstract idea, (MPEP § 2106.05(f)) that does not serve to integrate the abstract idea into a practical application. Therefore, claim 18 is directed to the abstract idea.
Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The additional elements recited in the claim beyond the identified judicial exception include a “plurality of devices comprises an optical camera,” which is a generic computer component used to implement the abstract idea, and accordingly, does not amount to significantly more than the abstract idea. (MPEP § 2106.05(f)). Also, the additional element of a “plurality of devices comprises an optical camera” is generally linking the abstract idea to a field of use (that is, specifying an environment for the abstract idea), that does not amount to significantly more than the abstract idea. (MPEP § 2106.05(h)). The claim also recites the additional element of an “AI/ML model,” which is recited at a high level of generality, and accordingly, is a generic computer component, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea. Also, the activity of “re-training an AI/ML model using the data to improve inference confidence scoring and mitigate drift,” is the use of the generic computer component (AI/ML model) to implement the abstract idea, (MPEP § 2106.05(f)) that does not amount to significantly more than the abstract idea. Therefore, claim 18 is subject-matter ineligible.
Claim 19 depends from claim 18. The claim provides more details or specifics to the additional element of the “plurality of devices,” where “at least a subset of the plurality of devices comprises different hardware,” and accordingly, is merely more specific to the additional element. The claim also recites “wherein at least a subset of the AI/ML model inferences is made by different types of AI/ML models.” The “different types of AI/ML models” are recited at such a high level of generality that these are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not integrate the abstract idea into a practical application, nor amount to significantly more than the abstract idea. Therefore, claim 19 is subject-matter ineligible.
Claim 21 depends directly or indirectly from claim 1. The claim recites the additional element of an “AI/ML characterization model,” which is recited at a high level of generality, and accordingly, is a generic computer component used in the common and expected manner to implement the abstract idea, (MPEP § 2106.05(f)), that does not serve to integrate the abstract idea into a practical application, nor amount to significantly more than the abstract idea. Therefore, claim 21 is subject-matter ineligible.
Claims 22 and 23 depend from claim 21. The claims recite more details or specifics of the additional element of an “AI/ML characterization model,” (claim 22: “the AI/ML characterization model comprises a weather condition or a lighting condition”; claim 23: “wherein the AI/ML characterization model comprises a weather condition”), and accordingly, are merely more specific to the abstract idea. Therefore, claims 22 and 23 are subject-matter ineligible.
Claim Rejections – 35 U.S.C. § 103
5. The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. § 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
7. This application currently names joint inventors. In considering patentability of the claims the Examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the Examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.
8. Claims 1, 4-8, 14-16, 18, and 19 are rejected under 35 U.S.C. § 103 as being unpatentable over US Patent 11276023 to Butler et al. [hereinafter Butler] in view of US Published Application 20190164310 to Noble et al. [hereinafter Noble], and US Published Application 20200012900 to Walters et al. [hereinafter Walters].
Regarding claim 1, Butler teaches [a]n Information Handling System (IHS) (Butler 3:49 teaches a system 102), comprising:
a processor (Butler 10:42-43 teaches a “processing element 604 comprises at least one processor”); and
a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution (Butler 17:37-41 teaches “any logic or application described herein that comprises software or code can be embodied in any non-transitory computer-readable medium or memory for use by or in connection with an instruction execution system such as a processing component in a computer system”), cause the IHS to:
detect an observation overlap between two or more devices (Butler, Fig. 1, teaches “components of a fraud detection system 102 [Examiner annotations in dashed-line text boxes]:”
[Butler, Fig. 1, reproduced as media_image1.png (610 × 867, greyscale), with Examiner annotations in dashed-line text boxes]
Butler 3:59-64 teaches “fraud detection system 102 is implemented by computing devices of online retailer 114 and/or is provided as a service by another computing device accessible by online retailer 114 over network 104 [(that is, two or more devices)]. Fraud detection system 102 is effective to evaluate a particular transaction”; further Butler 4:49-54 teaches “[t]ransaction data 270 (e.g., xt), is a vector describing a particular transaction. In various examples, transaction data 270 may describe a purchase amount, a time at which the transaction occurred, a delivery address, a quantity of the item purchased, etc.” [(that is, a “particular transaction” occurs at a “particular time,” which detects an observation overlap between two or more devices)]; still further, Butler, Fig. 6, teaches an example architecture including multiple camera sensors [Examiner annotations are in dashed-line text boxes]:
[Butler, Fig. 6, reproduced as media_image2.png (592 × 599, greyscale), with Examiner annotations in dashed-line text boxes]
Butler 11:61 to 12:2 teaches “one or more sensors 630 such as, for example, one or more position sensors, image sensors, and/or motion sensors. An image sensor 632 is shown in FIG. 6. Some examples of the architecture 600 may include multiple image sensors 632. For example, a panoramic camera system may comprise multiple image sensors 632 [(that is, between two or more devices)] resulting in multiple images and/or video frames that may be stitched and may be blended to form a seamless panoramic output [(that is, detect an observation overlap)]”) . . . ;
identify a consensus between Artificial Intelligence (AI) or Machine Intelligence (ML) model inferences made based upon data received by the two or more devices (Butler, Figure 2A, teaches prediction models [Examiner annotations in dashed-line text boxes]:
[Butler, Fig. 2A, reproduced as media_image3.png (801 × 769, greyscale), with Examiner annotations in dashed-line text boxes]
Butler 4:61-66 teaches “[t]he confidence scores of each of prediction models 202a, 202b, . . . , 202n are sent to a combiner 240 (e.g., a sigmoid function) that may normalize and/or combine the scores to generate a weighted average (prediction 280). The weighted average may represent a consensus of the prediction models 202a, 202b, . . . , 202n [(that is, identify a consensus between Artificial Intelligence (AI) or Machine Intelligence (ML) model inferences made based upon data received by the two or more devices)]”) by applying a weight to an AI/ML model inference (Butler 4:28-22 teaches “instructions are effective to program fraud detection system 102 to find an optimal set of weights for a given set of constraints (action 150) for the various machine learning models of fraud detection system 102 [(that is, “find an optimal set of weights” is by applying a weight to an AI/ML model inference)]”) based, at least in part, upon a confidence of the AI/ML model inference (Butler 4:56-66 teaches “[e]ach of prediction models 202a, 202b, . . . , 202n can receive the same transaction data 270 as input and outputs a confidence score indicating a confidence that the transaction data 270 represents (or does not represent, depending on the implementation) fraud. The confidence scores of each of prediction models 202a, 202b, . . . , 202n are sent to a combiner 240 (e.g., a sigmoid function) that may normalize and/or combine the scores to generate a weighted average (prediction 280). The weighted average may represent a consensus of the prediction models 202a, 202b, . . . , 202n [(that is, based, at least in part, upon a confidence of the AI/ML model inference)]”);
* * *
and in response to the identification (Butler 2:54-57 teaches “use of a Kalman filter in the adversarial fraud prevention context allows for incremental training of machine learning models as new ground truth data becomes available”), tag at least a subset of the data with a ground truth label (Butler 5:13-19 teaches “[l]abeled ground truth data (e.g., historical transaction data that is labeled as ‘fraudulent’ or ‘non-fraudulent’ [(that is, to “label” is to tag at least a subset of the data with a ground truth label)] and the associated prediction value by fraud detection system 102) may be received at any desired cadence. For example, labeled ground truth data [(that is, “labeled ground truth data” is in response to the identification, tag at least a subset of the data with a ground truth label)] may be received on a daily basis and/or as such data is received (from a credit card company, for example)”).
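For illustration only, the weighted-consensus mechanism Butler describes (per-model confidence scores normalized and combined by a sigmoid combiner into a weighted average) may be sketched as follows; the number of models, the scores, and the weights are hypothetical and form no part of the record:

```python
import math

def sigmoid(x):
    """Squash a combined score into (0, 1), as with Butler's combiner 240."""
    return 1.0 / (1.0 + math.exp(-x))

def consensus(confidences, weights):
    """Weighted average of per-model confidence scores, normalized by the
    total weight and passed through a sigmoid (hypothetical combiner)."""
    combined = sum(w * c for w, c in zip(weights, confidences)) / sum(weights)
    return sigmoid(combined)

# Three hypothetical prediction models scoring the same transaction data:
scores = [0.92, 0.85, 0.40]
weights = [0.5, 0.3, 0.2]
print(round(consensus(scores, weights), 3))  # → 0.689
```

The weights here stand in for the “optimal set of weights” Butler 4:28-22 references; in practice they would be learned under the system's constraints rather than fixed by hand.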
Though Butler teaches multiple position sensors, image sensors, and motion sensors, where a panoramic camera system may comprise multiple image sensors to provide a panoramic output in which image geometry information can be captured, Butler, however, does not explicitly teach –
* * *
[detect an observation overlap] . . . by determining a focal length and a pose of each of the two or more devices, wherein each of the two or more devices comprises an optical camera;
* * *
But Noble teaches –
* * *
[detect an observation overlap] . . . by determining a focal length and a pose of each of the two or more devices (Noble, Fig. 3, teaches a stereoscopic camera 330 [Examiner annotations in dashed-line text boxes]:
[Noble, Fig. 3, reproduced as media_image4.png (891 × 800, greyscale), with Examiner annotations in dashed-line text boxes]
Noble ¶ 0075 teaches “[t]he cameras must also be positioned so as to be capturing overlapping fields of view. Calibration of the scene can be initially performed by placing an object having a predefined calibration pattern in the scene to be imaged. Upon capturing stereo images [(that is, detect an observation overlap)], the calibration pattern can be used to identify corresponding pixels of the cameras and also to identify other parameters of the simulated 3D scene, such as the rotation and shift in three dimensions between the cameras [(that is, “rotation and shift” for the cameras is determining . . . a pose of each of the two or more devices)], focal lengths [(that is, determining a focal length . . . of each of the two or more devices)], distortion etc. Using these parameters, the three dimensional coordinates of objects can be calculated within the scene [(that is, detecting an observation overlap . . . by determining a focal length and a pose of each of the two or more devices)]”);
* * *
Butler and Noble are from the same or similar field of endeavor. Butler teaches prediction models can receive transaction data and output a confidence score relating to the transaction data. Noble teaches registering a position and an orientation of one or more cameras in a camera imaging system.
Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify Butler pertaining to model prediction confidence based on transaction data with the stereoscopic cameras each having focal lengths and orientations of Noble.
The motivation to do so is because, with “monitoring and surveillance systems, it is often necessary to monitor a scene from different perspectives. This is typically achieved by positioning multiple cameras at different positions and orientations throughout the scene. In some applications, such as vehicle and driver monitoring systems, it is advantageous to be able to track and map the positions of objects from the field of view of one camera to another. In these applications, it is necessary to know the relative positions and orientations of each camera so that an accurate mapping or projection of the object position between each camera view can be performed.” (Noble ¶ 0004).
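For illustration only, a minimal two-dimensional sketch of how a focal length (which implies a field of view) and a pose (position and heading) can determine whether two cameras observe an overlapping region; the sensor width, focal lengths, poses, and sample points are hypothetical, and this simplification is not Noble's calibration-pattern method:

```python
import math

def horizontal_fov(focal_length_mm, sensor_width_mm):
    """Horizontal field of view (radians) implied by a camera's focal length."""
    return 2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm))

def sees(cam, point):
    """True if `point` falls inside the camera's 2D view cone.
    `cam` is (x, y, heading_rad, fov_rad) -- a simplified pose."""
    x, y, heading, fov = cam
    angle = math.atan2(point[1] - y, point[0] - x)
    # Smallest signed angular difference, wrapped to [-pi, pi]:
    diff = abs((angle - heading + math.pi) % (2 * math.pi) - math.pi)
    return diff <= fov / 2.0

def observation_overlap(cam_a, cam_b, sample_points):
    """Detect overlap by testing whether any sample point is visible to both."""
    return any(sees(cam_a, p) and sees(cam_b, p) for p in sample_points)

# Two hypothetical cameras angled toward a shared region:
fov = horizontal_fov(focal_length_mm=24.0, sensor_width_mm=36.0)  # ~73.7 deg
cam_a = (0.0, 0.0, math.radians(45), fov)
cam_b = (10.0, 0.0, math.radians(135), fov)
points = [(5.0, 5.0), (5.0, -5.0)]
print(observation_overlap(cam_a, cam_b, points))  # → True
```

A full treatment would use the rotation, shift, focal length, and distortion parameters in three dimensions, as Noble describes; the sketch only shows why pose and focal length jointly suffice to decide overlap.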
Though Butler and Noble teach the features of determining ground truths based on image data for machine learning applications, the combination of Butler and Noble, however, does not explicitly teach -
* * *
modify the confidence based upon drift detected during operation of an AI/ML model used to make the AI/ML model inference;
in response to a determination that the drift is greater than a threshold value, reduce the confidence proportionally to the drift; and
* * *
But Walters teaches -
* * *
modify the confidence based upon drift detected (Walters ¶ 0185 teaches “the model may be corrected (updated) based on detected drift. Correcting the model may include model training and/or hyperparameter tuning, consistent with disclosed embodiments. Correcting the model may involve model training or hyperparameter tuning using the received event data and/or other data”) during operation of an AI/ML model used to make the AI/ML model inference (Walters ¶ 0010 teaches “detecting data drift based on the predicted data and the event data [(that is, “the predicted data” is during operation of the AI/ML model)]”; Walters ¶ 0182 teaches “the duration and/or periodicity of the schedule may be based on one or more aspects of the disclosed embodiments, such as a characteristic of the training data, predicted data, or event data (e.g., variance, sampling rate, detecting a measured value falls inside or outside a particular range or above or below particular threshold, etc.), detecting a correction in the predictive model, or other such feature of the disclosed embodiments”;
[Examiner notes that correcting the predictive model constitutes modifying the confidence based upon drift detected]);
in response to a determination that the drift is greater than a threshold value (Walters ¶ 0182 teaches “the duration and/or periodicity of the schedule may be based on one or more aspects of the disclosed embodiments, such as a characteristic of the training data, predicted data, or event data (e.g., variance, sampling rate, detecting a measured value [(that is, confidence)] falls inside or outside a particular range or above or below particular threshold, etc.), detecting a correction in the predictive model, or other such feature of the disclosed embodiments”; Walters ¶ 0184 teaches “detecting data drift is based on a comparison of predicted data to event data to determine a difference between predicted data and event data”; Walters ¶ 0184 teaches that “detecting data drift at step 1812 may be based on at least one of a least squares error method, a regression method, a correlation method, or other known statistical method. In some embodiments, the difference is determined using at least one of a Mean Absolute Error, a Root Mean Squared Error, a percent good classification, or the like. In some embodiments, detecting a difference between predicted data and event data includes determining whether a difference between generated data and event data meets or exceeds a threshold difference. In some embodiments, detecting data drift includes determining a difference between the data profile of the predicted data and the data profile of the event data. For example, drift may be detected based on a difference between the covariance matrix of the predicted data and a covariance matrix of the event data”), reduce the confidence proportionally to the drift (Walters ¶ 0202 teaches “a service to provide a model to a remote device, detect data drift in a stored version of the provided model, and notify the remote device that the provided model should be updated”; see also Walters ¶ 0185, which teaches “the model may be corrected (updated) based on detected drift. Correcting the model may include model training and/or hyperparameter tuning, consistent with disclosed embodiments. Correcting the model may involve model training or hyperparameter tuning using the received event data and/or other data” [(that is, reducing the confidence proportionally to the drift)]); and
* * *
Butler, Noble, and Walters are from the same or similar field of endeavor. Butler teaches prediction models can receive transaction data and output a confidence score relating to the transaction data. Noble teaches registering a position and an orientation of one or more cameras in a camera imaging system. Walters teaches to efficiently identify data drift early, before spending money, resources, or time on an obsolete model.
Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify the combination of Butler and Noble, pertaining to model prediction confidence based on transaction data from multiple cameras, with the drift detection and model-obsolescence monitoring of Walters.
The motivation to do so is to “provide advantages over the conventional approaches . . . by detecting data drift before models fail, become obsolete, or need to be retrained.” (Walters ¶ 0009).
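Examiner notes, by way of a purely hypothetical illustration (the function names, `scale` factor, and threshold value are the Examiner's and appear in no cited reference), that the cited mechanism of detecting drift against a threshold and reducing confidence proportionally to the drift may be modeled as follows:

```python
def detect_drift(predicted, observed):
    """Mean Absolute Error between predicted data and event data:
    one of the statistics Walters ¶ 0184 names for detecting drift."""
    pairs = list(zip(predicted, observed))
    return sum(abs(p - o) for p, o in pairs) / len(pairs)

def adjust_confidence(confidence, drift, threshold, scale=1.0):
    """Reduce the inference confidence proportionally to the drift,
    but only when the drift exceeds the threshold (illustrative only)."""
    if drift > threshold:
        return max(0.0, confidence - scale * drift)
    return confidence
```

For example, a model whose predictions drift by 0.2 against event data, with a threshold of 0.1, would have its confidence of 0.9 reduced to approximately 0.7; a drift of 0.05 would leave the confidence unchanged.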
Examiner notes that the Applicant’s preamble is not afforded patentable weight because the claim preamble is not “necessary to give life, meaning, and vitality” to the claim. Moreover, because the Applicant’s preamble merely states the purpose or intended use of the invention rather than providing a distinct definition of any of the claimed invention’s limitations, the preamble is not considered a limitation and is of no significance to claim construction.
Regarding claim 4, the combination of Butler, Noble, and Walters teaches all of the limitations of claim 1, as described above in detail.
Butler teaches -
wherein at least a subset of the two or more devices comprises instances of the identical hardware (Butler 3:37-40 teaches “[i]n the presence of adversarial fraud attacks, online learning becomes more powerful due to its advantages in incorporating emerging data patterns in real time [(that is, instances)] rather than based on historical data [(that is, “data patterns in real time” is at least a subset of the two or more devices comprises instances of the identical hardware)]”).
Regarding claim 5, the combination of Butler, Noble, and Walters teaches all of the limitations of claim 1, as described above in detail.
Butler teaches -
wherein at least a subset of the two or more devices comprises different hardware (Butler, Fig. 6, teaches an example architecture of a computing device [Examiner annotations in dashed-line text boxes]:
[Image: media_image5.png (greyscale)]
Butler 10:33-40 teaches “[i]t will be appreciated that not all devices will include all of the components of the architecture 600 and some user devices [112] may include additional components not shown in the architecture 600 [(that is, wherein at least a subset of the two or more devices comprises different hardware)]”).
Regarding claim 6, the combination of Butler, Noble, and Walters teaches all of the limitations of claim 1, as described above in detail.
Butler teaches -
wherein at least a subset of the AI/ML model inferences is made by distinct instances of an identical AI/ML model (Butler 2:27-37 teaches “updating the weights (or sets of weights) of machine learning models is referred to as determining updated states of the machine learning models [(that is, a “state” is a distinct instance of the identical AI/ML model)]. Additionally, the automatic determination of machine learning model weights may take into account the recentness of ground truth data [(that is, wherein at least a subset of the AI/ML model inferences is made by distinct instances of the same AI/ML model)]”).
Regarding claim 7, the combination of Butler, Noble, and Walters teaches all of the limitations of claim 1, as described above in detail.
Butler teaches -
wherein at least a subset of the AI/ML model inferences is made by different types of AI/ML models (Butler 4:44-49 teaches “[e]ach of prediction models 202 a, 202 b, . . . , 202 n may use a different machine learning algorithm (e.g., logistic regression, random forest, a neural network, etc. [(that is, different types of AI/ML models)]) in order to use a plurality of different classification methods to generate a prediction as to whether a particular purchase is fraudulent or legitimate [(that is, wherein at least a subset of the AI/ML model inferences is made by different types of AI/ML models)]”).
Regarding claim 8, the combination of Butler, Noble, and Walters teaches all of the limitations of claim 1, as described above in detail.
Noble teaches -
wherein each of the AI/ML model inferences comprises detection of at least one of: an object or an image (Noble ¶ 0079 teaches “the geometric appearance of one or more of the known features is identified within the three dimensional image. This may occur through pattern matching, shape recognition or the like. Finally, at stage 603, the three dimensional position and orientation of the camera relative to the known features is determined from the geometric appearance [(that is, “geometric appearance” is detection of at least one of: an object)]”).
Regarding claim 14, the combination of Butler, Noble, and Walters teaches all of the limitations of claim 1, as described above in detail.
Butler teaches -
wherein the program instructions, upon execution, further cause the IHS to:
select the subset of the data using the tag (Butler 5:17-19 teaches “labeled ground truth data [(that is, data using the tag)] may be received on a daily basis and/or as such data is received (from a credit card company, for example)”; Butler 2:29-32 teaches “updated states of the machine learning models. Additionally, the automatic determination of machine learning model weights may take into account the recentness of ground truth data [(that is, “recentness” is to select the subset of the data using the tag)]”); and
re-train an AI model with the subset of the data (Butler 3:31-35 teaches “various machine learning models described herein may be automatically updated [(that is, re-train an AI model)] in a dynamic fashion based on the labeled training data (e.g., labeled ground truth data), as such training data becomes available [(that is, re-train an AI model with the subset of the data)]”).
Regarding claim 15, Butler teaches [a] hardware memory device having program instructions stored thereon that, upon execution, cause an Information Handling System (IHS) (Butler 17:37-41 teaches “any logic or application described herein that comprises software or code can be embodied in any non-transitory computer-readable medium or memory for use by or in connection with an instruction execution system such as a processing component in a computer system”) to:
detect an observation overlap between a plurality of devices (Butler, Fig. 1, teaches “components of a fraud detection system 102 [Examiner annotations in dashed-line text boxes]:”
[Image: media_image1.png (greyscale)]
Butler 3:59-64 teaches “fraud detection system 102 is implemented by computing devices of online retailer 114 and/or is provided as a service by another computing device accessible by online retailer 114 over network 104 [(that is, a plurality of devices)]. Fraud detection system 102 is effective to evaluate a particular transaction”; further Butler 4:49-54 teaches “[t]ransaction data 270 (e.g., xt), is a vector describing a particular transaction. In various examples, transaction data 270 may describe a purchase amount, a time at which the transaction occurred, a delivery address, a quantity of the item purchased, etc.” [(that is, a “particular transaction” occurs at a “particular time,” which is to detect an observation overlap between a plurality of devices)]; still further, Butler, fig. 6, teaches an example architecture including multiple camera sensors [Examiner annotations are in dashed-line text boxes]:
[Image: media_image2.png (greyscale)]
Butler 11:61 to 12:2 teaches “one or more sensors 630 such as, for example, one or more position sensors, image sensors, and/or motion sensors. An image sensor 632 is shown in FIG. 6. Some examples of the architecture 600 may include multiple image sensors 632. For example, a panoramic camera system may comprise multiple image sensors 632 [(that is, a plurality of devices)] resulting in multiple images and/or video frames that may be stitched and may be blended to form a seamless panoramic output [(that is, detect an observation overlap)]”) . . . ;
in response to the observation overlap among a plurality of devices (Butler, Fig. 1, teaches “components of a fraud detection system 102 [Examiner annotations in dashed-line text boxes]:”
[Image: media_image6.png (greyscale)]
Butler 3:59-64 teaches “fraud detection system 102 is implemented by computing devices of online retailer 114 and/or is provided as a service by another computing device accessible by online retailer 114 over network 104 [(that is, plurality of devices)]. Fraud detection system 102 is effective to evaluate a particular transaction”; further Butler 4:49-54 teaches “[t]ransaction data 270 (e.g., xt), is a vector describing a particular transaction. In various examples, transaction data 270 may describe a purchase amount, a time at which the transaction occurred, a delivery address, a quantity of the item purchased, etc.” [(that is, a “particular transaction” occurs at a “particular time,” which is in response to an observation overlap among a plurality of devices)]),
identify a consensus between Artificial Intelligence (AI) or Machine Intelligence (ML) model inferences made based upon data collected by the plurality of devices (Butler, Figure 2A, teaches prediction models [Examiner annotations in dashed-line text boxes]:
[Image: media_image7.png (greyscale)]
Butler 4:49-54 teaches “[t]ransaction data 270 (e.g., xt), [(as a function of time)] is a vector describing a particular transaction [that] may describe a purchase amount, a time at which the transaction occurred, a delivery address, a quantity of the item purchased, etc.” [(that is, identify a consensus between Artificial Intelligence (AI) or Machine Intelligence (ML) model inferences made based upon data collected by the plurality of devices)]), by applying a weight to an AI/ML model inference (Butler 4:28-22 teaches “instructions are effective to program fraud detection system 102 to find an optimal set of weights for a given set of constraints (action 150) for the various machine learning models of fraud detection system 102 [(that is, “find an optimal set of weights” is by applying a weight to an AI/ML model inference)]”) based, at least in part, upon a confidence of the AI/ML model inference (Butler 4:56-66 teaches “[e]ach of prediction models 202a, 202b, . . . , 202n can receive the same transaction data 270 as input and outputs a confidence score indicating a confidence that the transaction data 270 represents (or does not represent, depending on the implementation) fraud. The confidence scores of each of prediction models 202a, 202b, . . . , 202n are sent to a combiner 240 (e.g., a sigmoid function) that may normalize and/or combine the scores to generate a weighted average (prediction 280). The weighted average may represent a consensus of the prediction models 202a, 202b, . . . , 202n [(that is, based, at least in part, upon a confidence of the AI/ML model inference)]”);
* * *
in response to the consensus,
characterize the data as reference data (Butler 5:13-19 teaches “[l]abeled ground truth data (e.g., historical transaction data that is labeled as ‘fraudulent’ or ‘non-fraudulent’ [(that is, to “label” is to characterize the data as reference data)] and the associated prediction value by fraud detection system 102) may be received at any desired cadence. For example, labeled ground truth data [(that is, “labeled ground truth data” is in response to the consensus, characterize the data as reference data)] may be received on a daily basis and/or as such data is received (from a credit card company, for example)”); and
re-train the AI/ML model using the reference data (Butler 3:31-35 teaches “various machine learning models described herein may be automatically updated [(that is, re-train an AI model)] in a dynamic fashion based on the labeled training data (e.g., labeled ground truth data), as such training data becomes available [(that is, re-train an AI model using the reference data)]”).
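Examiner notes, for illustration only (this sketch is the Examiner's; the function name and arguments are hypothetical and appear in no cited reference), that the normalized weighted average that Butler's combiner 240 produces as a consensus of per-model confidence scores may be modeled as:

```python
def consensus(scores, weights):
    """Combine per-model confidence scores into a normalized weighted
    average, analogous to Butler's combiner 240 generating a consensus
    prediction (prediction 280) from prediction models 202a..202n."""
    total = sum(weights)  # normalize so the weights sum to one
    return sum(w * s for w, s in zip(weights, scores)) / total
```

For example, three models reporting confidences 0.8, 0.6, and 0.7 with weights 1, 1, and 2 yield a consensus of 0.7. Butler's actual combiner may additionally apply a sigmoid; that step is omitted here.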
Though Butler teaches multiple position sensors, image sensors, and motion sensors, where a panoramic camera system may comprise multiple image sensors to provide a panoramic output in which image geometry information can be captured, Butler, however, does not explicitly teach –
* * *
[detect an observation overlap] . . . by determining a focal length and a pose of each of the two or more devices, wherein each of the two or more devices comprises an optical camera;
* * *
But Noble teaches –
* * *
[detect an observation overlap] . . . by determining a focal length and a pose of each of the two or more devices (Noble, Fig. 3, teaches a stereoscopic camera 330 [Examiner annotations in dashed-line text boxes]:
[Image: media_image4.png (greyscale)]
Noble ¶ 0075 teaches “[t]he cameras must also be positioned so as to be capturing overlapping fields of view. Calibration of the scene can be initially performed by placing an object having a predefined calibration pattern in the scene to be imaged. Upon capturing stereo images [(that is, detect an observation overlap)], the calibration pattern can be used to identify corresponding pixels of the cameras and also to identify other parameters of the simulated 3D scene, such as the rotation and shift in three dimensions between the cameras [(that is, “rotation and shift” for the cameras is determining . . . a pose of each of the two or more devices)], focal lengths [(that is, determining a focal length . . . of each of the two or more devices)], distortion etc. Using these parameters, the three dimensional coordinates of objects can be calculated within the scene [(that is, [detect an observation overlap] . . . by determining a focal length and a pose of each of the two or more devices)]”);
* * *
Butler and Noble are from the same or similar field of endeavor. Butler teaches prediction models can receive transaction data and output a confidence score relating to the transaction data. Noble teaches registering a position and an orientation of one or more cameras in a camera imaging system.
Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify Butler pertaining to model prediction confidence based on transaction data with the stereoscopic cameras each having focal lengths and orientations of Noble.
The motivation to do so is that, with “monitoring and surveillance systems, it is often necessary to monitor a scene from different perspectives. This is typically achieved by positioning multiple cameras at different positions and orientations throughout the scene. In some applications, such as vehicle and driver monitoring systems, it is advantageous to be able to track and map the positions of objects from the field of view of one camera to another. In these applications, it is necessary to know the relative positions and orientations of each camera so that an accurate mapping or projection of the object position between each camera view can be performed.” (Noble ¶ 0004).
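Examiner notes, as a simplified two-dimensional illustration only (the function names, sensor-width parameter, and yaw-only pose are hypothetical; Noble's full calibration uses three-dimensional rotation, translation, and distortion parameters), that determining an observation overlap from each camera's focal length and pose may be modeled as:

```python
import math

def horizontal_fov(focal_length_mm, sensor_width_mm):
    """Horizontal field of view (radians) implied by a camera's
    focal length and sensor width (standard pinhole relation)."""
    return 2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm))

def views_overlap(yaw_a, fov_a, yaw_b, fov_b):
    """True if two cameras' horizontal fields of view intersect, given
    each camera's yaw heading (a one-parameter stand-in for pose)."""
    half = (fov_a + fov_b) / 2.0
    # smallest angular difference between the two headings
    diff = abs((yaw_a - yaw_b + math.pi) % (2.0 * math.pi) - math.pi)
    return diff < half
```

For instance, an 18 mm lens on a 36 mm-wide sensor yields a 90-degree horizontal field of view, so two such cameras whose headings differ by 0.5 radians observe an overlapping region.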
Though Butler and Noble teach the features of determining ground truths based on image data for machine learning applications, the combination of Butler and Noble, however, does not explicitly teach -
* * *
modify the confidence based upon drift detected during operation of an AI/ML model used to make the AI/ML model inference;
in response to a determination that the drift is greater than a threshold value, reduce the confidence proportionally to the drift; and
* * *
But Walters teaches -
* * *
modify the confidence based upon drift detected (Walters ¶ 0185 teaches “the model may be corrected (updated) based on detected drift. Correcting the model may include model training and/or hyperparameter tuning, consistent with disclosed embodiments. Correcting the model may be involve model training or hyperparameter tuning using the received event data and/or other data”) during operation of an AI/ML model used to make the AI/ML model inference (Walters ¶ 0010 teaches “detecting data drift based on the predicted data and the event data [(that is, “the predicted data” is during operation of the AI/ML model)]”; Walters ¶ 0182 teaches “the duration and/or periodicity of the schedule may be based on one or more aspects of the disclosed embodiments, such as a characteristic of the training data, predicted data, or event data (e.g., variance, sampling rate, detecting a measured value falls inside or outside a particular range or above or below particular threshold, etc.), detecting a correction in the predictive model, or other such feature of the disclosed embodiments”;
[Examiner notes that correcting the predictive model corresponds to modifying the confidence based upon drift detected];
in response to a determination that the drift is greater than a threshold value (Walters ¶ 0182 teaches “the duration and/or periodicity of the schedule may be based on one or more aspects of the disclosed embodiments, such as a characteristic of the training data, predicted data, or event data (e.g., variance, sampling rate, detecting a measured value [(that is, confidence)] falls inside or outside a particular range or above or below particular threshold, etc.), detecting a correction in the predictive model, or other such feature of the disclosed embodiments”; Walters ¶ 0184 teaches “detecting data drift is a based on a comparison of predicted data to event data to determine a difference between predicted data and event data”; Walters ¶ 0184 teaches that “detecting data drift at step 1812 may be based on at least one of a least squares error method, a regression method, a correlation method, or other known statistical method. In some embodiments, the difference is determined using at least one of a Mean Absolute Error, a Root Mean Squared Error, a percent good classification, or the like. In some embodiments, detecting a difference between predicted data and event data includes determining whether a difference between generated data and event data meets or exceeds a threshold difference. In some embodiments, detecting data drift includes determining a difference between the data profile of the predicted data and the data profile of the event data. For example, drift may be detected based on a difference between the covariance matrix of the predicted data and a covariance matrix of the event data”), reduce the confidence proportionally to the drift (Walters ¶ 0202 teaches “a service to provide a model to a remote device, detect data drift in a stored version of the provided model, and notify the remote device that the provided model should be updated”; see also Walters ¶ 0185, which teaches “the model may be corrected (updated) based on detected drift. 
Correcting the model may include model training and/or hyperparameter tuning, consistent with disclosed embodiments. Correcting the model may be involve model training or hyperparameter tuning using the received event data and/or other data” [(that is, reducing the confidence proportionally to the drift)]); and
* * *
Butler, Noble, and Walters are from the same or similar field of endeavor. Butler teaches prediction models can receive transaction data and output a confidence score relating to the transaction data. Noble teaches registering a position and an orientation of one or more cameras in a camera imaging system. Walters teaches to efficiently identify data drift early, before spending money, resources, or time on an obsolete model.
Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify the combination of Butler and Noble, pertaining to model prediction confidence based on transaction data from multiple cameras, with the drift detection and model-obsolescence monitoring of Walters.
The motivation to do so is to “provide advantages over the conventional approaches . . . by detecting data drift before models fail, become obsolete, or need to be retrained.” (Walters ¶ 0009).
Examiner notes that the Applicant’s preamble is not afforded patentable weight because the claim preamble is not “necessary to give life, meaning, and vitality” to the claim. Moreover, because the Applicant’s preamble merely states the purpose or intended use of the invention rather than providing a distinct definition of any of the claimed invention’s limitations, the preamble is not considered a limitation and is of no significance to claim construction.
Regarding claim 16, the combination of Butler, Noble, and Walters teaches all of the limitations of claim 15, as described above in detail.
Butler teaches -
wherein at least a subset of the plurality of devices comprises different sensor hardware (Butler, Fig. 6, teaches an example architecture of a computing device [Examiner annotations in dashed-line text boxes]:
[Image: media_image8.png (greyscale)]
Butler 10:33-40 teaches “[i]t will be appreciated that not all devices will include all of the components of the architecture 600 and some user devices [112] may include additional components not shown in the architecture 600 [(that is, wherein at least a subset of the plurality of devices comprises different sensor hardware)]”), and
wherein at least a subset of the AI/ML model inferences is made by different types of AI/ML models (Butler 4:44-49 teaches “[e]ach of prediction models 202 a, 202 b, . . . , 202 n may use a different machine learning algorithm (e.g., logistic regression, random forest, a neural network, etc. [(that is, different types of AI/ML models)]) in order to use a plurality of different classification methods to generate a prediction as to whether a particular purchase is fraudulent or legitimate [(that is, wherein at least a subset of the AI/ML model inferences is made by different types of AI/ML models)]”).
Regarding claim 18, Butler teaches [a] method, comprising:
detecting an observation overlap between a plurality of devices (Butler, Fig. 1, teaches “components of a fraud detection system 102 [Examiner annotations in dashed-line text boxes]:”
[Image: media_image1.png (greyscale)]
Butler 3:59-64 teaches “fraud detection system 102 is implemented by computing devices of online retailer 114 and/or is provided as a service by another computing device accessible by online retailer 114 over network 104 [(that is, a plurality of devices)]. Fraud detection system 102 is effective to evaluate a particular transaction”; further Butler 4:49-54 teaches “[t]ransaction data 270 (e.g., xt), is a vector describing a particular transaction. In various examples, transaction data 270 may describe a purchase amount, a time at which the transaction occurred, a delivery address, a quantity of the item purchased, etc.” [(that is, a “particular transaction” occurs at a “particular time,” which is detecting an observation overlap between a plurality of devices)]; still further, Butler, fig. 6, teaches an example architecture including multiple camera sensors [Examiner annotations are in dashed-line text boxes]:
[Image: media_image2.png (greyscale)]
Butler 11:61 to 12:2 teaches “one or more sensors 630 such as, for example, one or more position sensors, image sensors, and/or motion sensors. An image sensor 632 is shown in FIG. 6. Some examples of the architecture 600 may include multiple image sensors 632. For example, a panoramic camera system may comprise multiple image sensors 632 [(that is, a plurality of devices)] resulting in multiple images and/or video frames that may be stitched and may be blended to form a seamless panoramic output [(that is, detecting an observation overlap)]”) . . . ;
identifying a consensus between Artificial Intelligence (AI) or Machine Intelligence (ML) model inferences made based upon data collected by the plurality of devices in response to an observation overlap among a plurality of devices (Butler, Figure 2A, teaches prediction models [Examiner annotations in dashed-line text boxes]:
[Image: media_image7.png (greyscale)]
Butler 4:49-54 teaches “[t]ransaction data 270 (e.g., xt), [(as a function of time)] is a vector describing a particular transaction [that] may describe a purchase amount, a time at which the transaction occurred, a delivery address, a quantity of the item purchased, etc.” [(that is, identifying a consensus between Artificial Intelligence (AI) or Machine Intelligence (ML) model inferences made based upon data collected by the plurality of devices in response to an observation overlap among a plurality of devices)]), by applying a weight to an AI/ML model inference (Butler 4:28-22 teaches “instructions are effective to program fraud detection system 102 to find an optimal set of weights for a given set of constraints (action 150) for the various machine learning models of fraud detection system 102 [(that is, “find an optimal set of weights” is by applying a weight to an AI/ML model inference)]”) based, at least in part, upon a confidence of the AI/ML model inference (Butler 4:56-66 teaches “[e]ach of prediction models 202a, 202b, . . . , 202n can receive the same transaction data 270 as input and outputs a confidence score indicating a confidence that the transaction data 270 represents (or does not represent, depending on the implementation) fraud. The confidence scores of each of prediction models 202a, 202b, . . . , 202n are sent to a combiner 240 (e.g., a sigmoid function) that may normalize and/or combine the scores to generate a weighted average (prediction 280). The weighted average may represent a consensus of the prediction models 202a, 202b, . . . , 202n [(that is, based, at least in part, upon a confidence of the AI/ML model inference)]”);
* * *
and re-training an AI/ML model using the data to improve inference confidence scoring and mitigate drift (Butler 3:31-35 teaches “various machine learning models described herein may be automatically updated [(that is, re-training an AI/ML model)] in a dynamic fashion based on the labeled training data (e.g., labeled ground truth data), as such training data becomes available [(that is, re-training an AI/ML model using the data)]”).
Though Butler teaches multiple position sensors, image sensors, and motion sensors, where a panoramic camera system may comprise multiple image sensors to provide a panoramic output in which image geometry information can be captured, Butler, however, does not explicitly teach –
* * *
[detect an observation overlap] . . . by determining a focal length and a pose of each of the two or more devices, wherein each of the two or more devices comprises an optical camera;
* * *
But Noble teaches –
* * *
[detect an observation overlap] . . . by determining a focal length and a pose of each of the two or more devices (Noble, Fig. 3, teaches a stereoscopic camera 330 [Examiner annotations in dashed-line text boxes]:
[Image: media_image4.png (greyscale)]
Noble ¶ 0075 teaches “[t]he cameras must also be positioned so as to be capturing overlapping fields of view. Calibration of the scene can be initially performed by placing an object having a predefined calibration pattern in the scene to be imaged. Upon capturing stereo images [(that is, detect an observation overlap)], the calibration pattern can be used to identify corresponding pixels of the cameras and also to identify other parameters of the simulated 3D scene, such as the rotation and shift in three dimensions between the cameras [(that is, “rotation and shift” for the cameras is determining . . . a pose of each of the two or more devices)], focal lengths [(that is, determining a focal length . . . of each of the two or more devices)], distortion etc. Using these parameters, the three dimensional coordinates of objects can be calculated within the scene [(that is, [detect an observation overlap] . . . by determining a focal length and a pose of each of the two or more devices)]”);
* * *
Butler and Noble are from the same or similar field of endeavor. Butler teaches prediction models can receive transaction data and output a confidence score relating to the transaction data. Noble teaches registering a position and an orientation of one or more cameras in a camera imaging system.
Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify Butler pertaining to model prediction confidence based on transaction data with the stereoscopic cameras each having focal lengths and orientations of Noble.
The motivation to do so is that, with “monitoring and surveillance systems, it is often necessary to monitor a scene from different perspectives. This is typically achieved by positioning multiple cameras at different positions and orientations throughout the scene. In some applications, such as vehicle and driver monitoring systems, it is advantageous to be able to track and map the positions of objects from the field of view of one camera to another. In these applications, it is necessary to know the relative positions and orientations of each camera so that an accurate mapping or projection of the object position between each camera view can be performed.” (Noble ¶ 0004).
Though Butler and Noble teach the features of determining ground truths based on image data for machine learning applications, the combination of Butler and Noble, however, does not explicitly teach -
* * *
modifying the confidence based upon drift detected during operation of an AI/ML model used to make the AI/ML model inference;
in response to a determination that the drift is greater than a threshold value, reducing the confidence proportionally to the drift; and
* * *
But Walters teaches -
* * *
modifying the confidence based upon drift detected (Walters ¶ 0185 teaches “the model may be corrected (updated) based on detected drift. Correcting the model may include model training and/or hyperparameter tuning, consistent with disclosed embodiments. Correcting the model may be involve model training or hyperparameter tuning using the received event data and/or other data”) during operation of an AI/ML model used to make the AI/ML model inference (Walters ¶ 0010 teaches “detecting data drift based on the predicted data and the event data [(that is, “the predicted data” is during operation of the AI/ML model)]”; Walters ¶ 0182 teaches “the duration and/or periodicity of the schedule may be based on one or more aspects of the disclosed embodiments, such as a characteristic of the training data, predicted data, or event data (e.g., variance, sampling rate, detecting a measured value falls inside or outside a particular range or above or below particular threshold, etc.), detecting a correction in the predictive model, or other such feature of the disclosed embodiments”;
[Examiner notes that correcting the predictive model corresponds to modifying the confidence based upon drift detected];
in response to a determination that the drift is greater than a threshold value (Walters ¶ 0182 teaches “the duration and/or periodicity of the schedule may be based on one or more aspects of the disclosed embodiments, such as a characteristic of the training data, predicted data, or event data (e.g., variance, sampling rate, detecting a measured value [(that is, confidence)] falls inside or outside a particular range or above or below particular threshold, etc.), detecting a correction in the predictive model, or other such feature of the disclosed embodiments”; Walters ¶ 0184 teaches “detecting data drift is a based on a comparison of predicted data to event data to determine a difference between predicted data and event data”; Walters ¶ 0184 teaches that “detecting data drift at step 1812 may be based on at least one of a least squares error method, a regression method, a correlation method, or other known statistical method. In some embodiments, the difference is determined using at least one of a Mean Absolute Error, a Root Mean Squared Error, a percent good classification, or the like. In some embodiments, detecting a difference between predicted data and event data includes determining whether a difference between generated data and event data meets or exceeds a threshold difference. In some embodiments, detecting data drift includes determining a difference between the data profile of the predicted data and the data profile of the event data. For example, drift may be detected based on a difference between the covariance matrix of the predicted data and a covariance matrix of the event data”), reducing the confidence proportionally to the drift (Walters ¶ 0202 teaches “a service to provide a model to a remote device, detect data drift in a stored version of the provided model, and notify the remote device that the provided model should be updated”; see also Walters ¶ 0185, which teaches “the model may be corrected (updated) based on detected drift. 
Correcting the model may include model training and/or hyperparameter tuning, consistent with disclosed embodiments. Correcting the model may be involve model training or hyperparameter tuning using the received event data and/or other data [(that is, reducing the confidence proportionally to the drift)])”); and
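For illustration only, the drift-detection and proportional-confidence-reduction mapping discussed above can be sketched as follows. This is a minimal hypothetical example: the function names, the use of Mean Absolute Error as the drift metric (one of the statistical methods Walters ¶ 0184 lists), and the linear scale factor are all assumptions, not the claimed invention or Walters' actual implementation.

```python
# Hypothetical sketch: detect drift as the Mean Absolute Error between
# predicted data and observed (event) data, then reduce a model-confidence
# value in proportion to the drift once it exceeds a threshold.

def detect_drift(predicted, event):
    """Mean Absolute Error between predicted data and event data."""
    return sum(abs(p - e) for p, e in zip(predicted, event)) / len(predicted)

def adjust_confidence(confidence, drift, threshold, scale=1.0):
    """Reduce confidence proportionally to the drift when it exceeds the threshold."""
    if drift > threshold:
        confidence = max(0.0, confidence - scale * drift)
    return confidence

drift = detect_drift([0.9, 0.8, 0.7], [0.9, 0.6, 0.4])  # MAE over three samples
conf = adjust_confidence(0.95, drift, threshold=0.1)
```

Under this sketch, a drift below the threshold leaves the confidence untouched, while a larger drift subtracts a proportional amount.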
* * *
Butler, Noble, and Walters are from the same or similar field of endeavor. Butler teaches that prediction models can receive transaction data and output a confidence score relating to the transaction data. Noble teaches registering a position and an orientation of one or more cameras in a camera imaging system. Walters teaches identifying data drift early, before money, resources, or time are spent on an obsolete model.
Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify the combination of Butler and Noble, pertaining to model prediction confidence based on transaction data from multiple cameras, with the drift detection leading to model obsolescence of Walters.
The motivation to do so is to “provide advantages over the conventional approaches . . . by detecting data drift before models fail, become obsolete, or need to be retrained.” (Walters ¶ 0009).
Examiner notes that the Applicant’s preamble does not afford patentable weight to the Applicant’s claims because the claim preamble is not “necessary to give life, meaning, and vitality” to the claim. Moreover, because the Applicant’s preamble merely states the purpose or intended use of the invention rather than any distinct definition of any of the claimed invention’s limitations, the preamble is not considered a limitation and is of no significance to claim construction.
Regarding claim 19, the combination of Butler, Noble, and Walters teaches all of the limitations of claim 18, as described above in detail.
Butler teaches -
wherein at least a subset of the plurality of devices comprises different hardware (Butler, Fig. 6, teaches an example architecture of a computing device [Examiner annotations in dashed-line text boxes]:
[media_image8.png (greyscale): Butler Fig. 6, example computing device architecture, with Examiner annotations]
Butler 10:33-40 teaches “[i]t will be appreciated that not all devices will include all of the components of the architecture 600 and some user devices [112] may include additional components not shown in the architecture 600 [(that is, wherein at least a subset of the plurality of devices comprises different sensor hardware)]”), and
wherein at least a subset of the AI/ML model inferences is made by different types of AI/ML models (Butler 4:44-49 teaches “[e]ach of prediction models 202 a, 202 b, . . . , 202 n may use a different machine learning algorithm (e.g., logistic regression, random forest, a neural network, etc. [(that is, different types of AI/ML models)]) in order to use a plurality of different classification methods to generate a prediction as to whether a particular purchase is fraudulent or legitimate [(that is, wherein at least a subset of the AI/ML model inferences is made by different types of AI/ML models)]”).
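The heterogeneous-model arrangement Butler describes at 4:44-49 (logistic regression, random forest, neural network, etc., each generating a prediction) can be illustrated as a confidence-weighted vote. This is purely a hypothetical sketch for clarity; the function name and the weighting scheme are assumptions, not Butler's disclosed implementation.

```python
# Hypothetical sketch: consensus across inferences made by different types of
# AI/ML models, where each inference is weighted by its confidence.

def weighted_consensus(inferences):
    """inferences: list of (label, confidence) pairs from different models.
    Returns the label with the greatest total confidence-weighted support."""
    totals = {}
    for label, confidence in inferences:
        totals[label] = totals.get(label, 0.0) + confidence
    return max(totals, key=totals.get)

# Three models of different types vote on whether a transaction is fraudulent:
votes = [("fraud", 0.9), ("legitimate", 0.6), ("fraud", 0.4)]
```

Here the "fraud" label accumulates 1.3 in weighted support against 0.6 for "legitimate", so the consensus favors "fraud" despite the split vote.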
9. Claims 9, 10 and 21-23 are rejected under 35 U.S.C. § 103 as being unpatentable over US Patent 11276023 to Butler et al. [hereinafter Butler] in view of US Published Application 20190164310 to Noble et al. [hereinafter Noble], US Published Application 20200012900 to Walters et al. [hereinafter Walters], and US Published Application 20220126864 to Moustafa et al. [hereinafter Moustafa].
Regarding claim 9, the combination of Butler, Noble, and Walters teaches all of the limitations of claim 1, as described above in detail.
Though Butler, Noble, and Walters teach automatically updating, based on labeled ground truth data as such training data becomes available, to determine whether a transaction is fraudulent or not through sensor data, the combination of Butler, Noble, and Walters, however, does not explicitly teach –
wherein to identify the consensus, the program instructions, upon execution, further cause the IHS to apply a weight to an AI/ML model inference based, at least in part, upon a hardware characteristic of a sensor employed to capture data used to make the AI/ML model inference.
But Moustafa teaches -
wherein to identify the consensus, the program instructions, upon execution, further cause the IHS to apply a weight to an AI/ML model inference based, at least in part, upon a hardware characteristic of a sensor employed to capture data used to make the AI/ML model inference (Moustafa, Fig. 124B, teaches learning weights for sensors under different contexts [Examiner annotations in dashed-line text boxes]:
[media_image9.png (greyscale): Moustafa Fig. 124B, learning weights for sensors under different contexts, with Examiner annotations]
Moustafa ¶ 0804 teaches an “action 12462 may be produced in the form of sensor weights 12464 to use during sensor fusion [(that is, apply a weight to an AI/ML model inference)]”; Moustafa ¶ 0788 teaches “[f]usion algorithm 12102 may be any suitable machine learning algorithm to analyze sensor data 12104, corresponding context information 12106 (as ground truth), and corresponding object locations 12108 (as ground truth). The sensor data 12104 may be captured from sensors of one or more autonomous vehicles [(that is, based . . . upon a hardware characteristic of a sensor employed to capture data used to make the AI/ML model inference)]”).
Butler, Noble, Walters, and Moustafa are from the same or similar field of endeavor.
Butler teaches prediction models can receive transaction data and output a confidence score relating to the transaction data. Noble teaches registering a position and an orientation of one or more cameras in a camera imaging system. Walters teaches to efficiently identify data drift early, before spending money, resources, or time on an obsolete model. Moustafa teaches models relied upon by the autonomous vehicle's systems are trained on data sets describing other preceding trips, whose ground truth may also be based on the perspective of the vehicle and the results observed or sensed through its sensors.
Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify the combination of Butler, Noble, and Walters, pertaining to model prediction confidence, with the object detection model of Moustafa, which is based on various sensor data having respective hardware characteristics.
The motivation to do so is for “an improved example autonomous system, participating sensor devices (e.g., connected vehicles) may utilize additional machine learning techniques to learn from such attributes as the time sensitivity of the data, availability of data transport options (e.g., cellular, wi-fi, the transport technology available (e.g., 4G, 5G), and the cost and available throughput of the channels) at different locations and times of the day, and other usages and preferences of vehicle users (and corresponding network and compute usage based on these usages (e.g., in-vehicle media streaming and gaming, etc.,) to determine an optimized option for when and how to transport what data to the cloud or another connected device.” (Moustafa ¶ 0225).
Regarding claim 10, the combination of Butler, Noble, and Walters teaches all of the limitations of claim 1, as described above in detail.
Though Butler, Noble, and Walters teach automatically updating, based on labeled ground truth data as such training data becomes available, to determine whether a transaction is fraudulent or not based on sensor data, the combination of Butler, Noble, and Walters, however, does not explicitly teach –
wherein to identify the consensus, the program instructions, upon execution, further cause the IHS to
apply a weight to an AI/ML model inference based, at least in part, upon an observational quality of a sensor employed to capture data used to make the AI/ML model inference.
But Moustafa teaches -
wherein to identify the consensus, the program instructions, upon execution, further cause the IHS to
apply a weight to an AI/ML model inference based, at least in part, upon an observational quality of a sensor employed to capture data used to make the AI/ML model inference (Moustafa ¶ 0785 teaches “sensor fusion improvement is achieved by adapting weights for each sensor based on the context. The SNR (and consequently the overall variance) [(that is, “SNR” is an observational quality)] may be improved by adaptively weighting data from the sensors differently based on the context [(that is, “adaptively weighting” is apply a weight to an AI/ML model inference based, at least in part, upon an observational quality of a sensor employed to capture data used to make the AI/ML model inference)]”).
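The adaptive, SNR-based sensor weighting described in Moustafa ¶ 0785 can be sketched as normalizing per-sensor signal-to-noise ratios into fusion weights. This is a hypothetical illustration only; the function names and the proportional-to-SNR weighting rule are assumptions and not Moustafa's disclosed algorithm.

```python
# Hypothetical sketch: weight each sensor's contribution to a fused inference
# in proportion to its observational quality (here, its SNR).

def snr_weights(snrs):
    """Normalize per-sensor SNR values into fusion weights summing to 1."""
    total = sum(snrs)
    return [s / total for s in snrs]

def fuse(estimates, weights):
    """Weighted average of per-sensor estimates."""
    return sum(e * w for e, w in zip(estimates, weights))

w = snr_weights([30.0, 10.0])   # a cleaner sensor is favored 3:1 over a noisier one
fused = fuse([1.0, 2.0], w)
```

Under this rule, the sensor with the higher SNR dominates the fused estimate, which is one way the overall variance can be reduced by context-dependent weighting.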
Butler, Noble, Walters, and Moustafa are from the same or similar field of endeavor. Butler teaches that prediction models can receive transaction data and output a confidence score relating to the transaction data. Noble teaches registering a position and an orientation of one or more cameras in a camera imaging system. Walters teaches identifying data drift early, before money, resources, or time are spent on an obsolete model. Moustafa teaches that models relied upon by an autonomous vehicle's systems are trained on data sets describing preceding trips, whose ground truth may also be based on the perspective of the vehicle and the results observed or sensed through its sensors.
Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify the combination of Butler, Noble, and Walters, pertaining to model prediction confidence, with the adaptive weighting of sensor data based on observational quality (e.g., SNR) of Moustafa.
The motivation to do so is for “an improved example autonomous system, participating sensor devices (e.g., connected vehicles) may utilize additional machine learning techniques to learn from such attributes as the time sensitivity of the data, availability of data transport options (e.g., cellular, wi-fi, the transport technology available (e.g., 4G, 5G), and the cost and available throughput of the channels) at different locations and times of the day, and other usages and preferences of vehicle users (and corresponding network and compute usage based on these usages (e.g., in-vehicle media streaming and gaming, etc.,) to determine an optimized option for when and how to transport what data to the cloud or another connected device.” (Moustafa ¶ 0225).
Regarding claim 21, the combination of Butler, Noble, and Walters teaches all of the limitations of claim 8, as described above in detail.
Though Butler, Noble, and Walters teach the features of determining ground truths based on image data for machine learning applications, the combination of Butler, Noble, and Walters, however, does not explicitly teach –
wherein the program instructions, upon execution, further cause the IHS to apply an AI/ML characterization model to the data to identify the object.
But Moustafa teaches -
wherein the program instructions, upon execution, further cause the IHS to apply an AI/ML characterization model to the data to identify the object (Moustafa, Fig. 49, teaches a flow of data categorization [(that is, characterization)], scoring, and handling [Examiner annotations in dashed-line text boxes]:
[media_image10.png (greyscale): Moustafa Fig. 49, flow of data categorization, scoring, and handling, with Examiner annotations]
Moustafa ¶ 0433 teaches that “[i]nformation about an instance of the detected objects (e.g., the detected objects as well as the context) may be provided to category assigner 4936 [(that is, an AI/ML characterization model)] which selects one or more categories for the instance (such as one or more of the categories described above or other suitable categories [(that is, “categorize” is identify the object)]”;
[Examiner notes that the plain meaning of the term “characterization” is to describe the descriptive features of an object or environment]).
Butler, Noble, Walters, and Moustafa are from the same or similar field of endeavor. Butler teaches that prediction models can receive transaction data and output a confidence score relating to the transaction data. Noble teaches registering a position and an orientation of one or more cameras in a camera imaging system. Walters teaches identifying data drift early, before money, resources, or time are spent on an obsolete model. Moustafa teaches that models relied upon by an autonomous vehicle's systems are trained on data sets describing preceding trips, whose ground truth may also be based on the perspective of the vehicle and the results observed or sensed through its sensors.
Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify the combination of Butler, Noble, and Walters, pertaining to model prediction confidence based on transaction data from multiple cameras with drift detection, with the object characterization and identification of Moustafa.
The motivation to do so is for “an improved example autonomous system, participating sensor devices (e.g., connected vehicles) may utilize additional machine learning techniques to learn from such attributes as the time sensitivity of the data, availability of data transport options (e.g., cellular, wi-fi, the transport technology available (e.g., 4G, 5G), and the cost and available throughput of the channels) at different locations and times of the day, and other usages and preferences of vehicle users (and corresponding network and compute usage based on these usages (e.g., in-vehicle media streaming and gaming, etc.,) to determine an optimized option for when and how to transport what data to the cloud or another connected device.” (Moustafa ¶ 0225).
Regarding claim 22, the combination of Butler, Noble, Walters, and Moustafa teaches all of the limitations of claim 21, as described above in detail.
Moustafa teaches -
wherein the AI/ML characterization model comprises a weather condition or a lighting condition (Moustafa ¶ 0436 & Fig. 49 (above) teaches the “location of captured data may be used by the autonomous vehicle computing system 4902 or the remote computing system 4904 to obtain other contextual data associated with capture of the data, such as the weather [(that is, a weather condition)], traffic, pedestrian flow, and so on (e.g., from a database or other service by using the location as input) [(that is, the AI/ML characterization model comprises a weather condition)]”).
Regarding claim 23, the combination of Butler, Noble, Walters, and Moustafa teaches all of the limitations of claim 21, as described above in detail.
Moustafa teaches -
wherein the AI/ML characterization model comprises a weather condition (Moustafa ¶ 0436 & Fig. 49 (above) teaches the “location of captured data may be used by the autonomous vehicle computing system 4902 or the remote computing system 4904 to obtain other contextual data associated with capture of the data, such as the weather [(that is, a weather condition)], traffic, pedestrian flow, and so on (e.g., from a database or other service by using the location as input) [(that is, the AI/ML characterization model comprises a weather condition)]”).
Response to Arguments
10. Examiner has fully considered Applicant’s arguments, and responds below, accordingly.
Claim Rejection – 35 U.S.C. § 101
11. Applicant submits that “Claims 1, 4-10, 14-16, 18, 19, 21, and 22 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more. Applicant requests that this rejection be held in abeyance until the outcome of the below claims is determined by the Examiner.” (Response at p. 6).
Examiner’s Response:
Examiner notes the Applicant’s request that the rejection be held in abeyance.
Claim Rejection – 35 U.S.C. § 103
12. Applicant submits that “[a]ssuming for argument purposes that Noble teaches the determination of focal length and pose of two cameras (devices), Noble does not teach using these parameters (focal length and pose) in the particular manner as recited in claim 1. If Noble uses these parameters (focal length and pose), it is only to calculate the three dimensional coordinates of objects within the scene. This is clearly not a clear and affirmative teaching of using the parameters to detect an observation overlap between the cameras, as required by claim 1. Moreover, because these teaching are lacking, there can be no teaching of claim 1's subsequent limitations which rely on said detection of observation overlap. For example, the identify and modify limitations in claim 1 cannot then be taught, as well as the subsequent limitations, all which are expressly dependent on the detection of the observation overlap limitation.” (Response at pp.7-8 (emphasis added by Examiner)).
Examiner’s Response:
Examiner respectfully disagrees because Applicant relies on language not tethered to the instant claims. For example, Applicant argues that the stereoscopic camera 330 of Noble “is clearly not a clear and affirmative teaching of using the parameters to detect an observation overlap between the cameras, as required by claim 1.” (Response at p. 7 (emphasis added by Examiner)). In other words, the claim does not positively recite such language; it simply recites “determining” the optical parameters.
In contrast, the “focal length and pose” of exemplar claim 1 simply recites:
1. An Information Handling System (IHS), comprising:
* * *
detect an observation overlap between two or more devices by determining a focal length and a pose of each of the two or more devices, wherein each of the two or more devices comprises an optical camera;
* * *
(claim 1, lines 1 & 5-7 (emphasis added by Examiner)).
Noble is relied upon as teaching this limitation. Specifically, the Final Action sets out that:
Noble ¶ 0075 teaches ‘[t]he cameras must also be positioned so as to be capturing overlapping fields of view. Calibration of the scene can be initially performed by placing an object having a predefined calibration pattern in the scene to be imaged. Upon capturing stereo images [(that is, detect an observation overlap)], the calibration pattern can be used to identify corresponding pixels of the cameras and also to identify other parameters of the simulated 3D scene, such as the rotation and shift in three dimensions between the cameras [(that is, "rotation and shift" for the cameras is determining . . . a pose of each of the two or more devices)], focal lengths [(that is, determining a focal length . . . of each of the two or more devices)], distortion etc. Using these parameters, the three dimensional coordinates of objects can be calculated within the scene [(that is, detecting an observation overlap . . . by determining a focal length and a pose of each of the two or more devices)]’).
Particularly, Noble teaches that “[u]sing these parameters [of the simulated 3D scene] such as the rotation and shift, focal length, distortion, etc. . . . the three dimensional coordinates of objects can be calculated within the scene.”
The plain meaning of the claim term “observation overlap” is the situation where two or more datasets share common observations or time blocks. That is, as between two camera devices, such an “observation overlap” relates to stereoscopy, where two images are taken from different perspectives to create a three-dimensional effect. (see Noble ¶ 0075 & Fig. 3).
In this regard, the broadest reasonable interpretation of the claim term “observation overlap” covers the teachings of Noble, particularly because the “two or more devices” of Noble is a “stereoscopic camera 330,” which is not inconsistent with the Applicant’s disclosure. (MPEP § 2111). Notably, stereoscopy inherently relies on “observation overlap” to render a three-dimensional effect.
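As an illustration of the geometric relationship discussed above, an "observation overlap" between two cameras can be sketched as checking whether a point falls within each camera's field of view, where the field of view follows from the focal length and the pose gives position and orientation. This is a simplified, hypothetical 2D sketch for clarity; the function names, the pinhole-style field-of-view formula, and the example values are assumptions, not the disclosure of Noble or the Applicant.

```python
# Hypothetical 2D sketch: a point is observed in overlap when it lies inside
# the field of view of every camera, given each camera's pose (position, yaw)
# and a field of view derived from its focal length.
import math

def half_fov_from_focal(focal_mm, sensor_width_mm):
    """Half the horizontal field of view implied by a focal length (pinhole model)."""
    return math.atan2(sensor_width_mm / 2.0, focal_mm)

def in_fov(cam_pos, cam_yaw, half_fov, point):
    """True if the point lies within the camera's horizontal field of view."""
    dx, dy = point[0] - cam_pos[0], point[1] - cam_pos[1]
    bearing = math.atan2(dy, dx)
    diff = (bearing - cam_yaw + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
    return abs(diff) <= half_fov

def observation_overlap(cams, point):
    """Each camera is (position, yaw, focal_mm, sensor_width_mm)."""
    return all(in_fov(pos, yaw, half_fov_from_focal(f, w), point)
               for pos, yaw, f, w in cams)

# Two cameras 2 m apart, both angled inward toward the scene:
cams = [((0.0, 0.0), math.radians(45), 35.0, 36.0),
        ((2.0, 0.0), math.radians(135), 35.0, 36.0)]
```

In this sketch a point midway between and in front of the two cameras falls inside both fields of view, while a point far off to one side does not, mirroring how overlapping fields of view are a precondition of the stereo calibration Noble describes.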
Also, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. Where a rejection of a claim is based on two or more references, a reply that is limited to what a subset of the applied references teaches or fails to teach, or that fails to address the combined teaching of the applied references may be considered to be an argument that attacks the reference(s) individually, as is the case here with the cited prior art of Noble. (MPEP § 2145.IV).
Moreover, the rejections set out in the Final Action clearly set forth which claim limitations are taught by each of the prior art references, and the reason why it would be obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant's invention to combine their teachings, and Applicant has not explained why the cited prior art references cannot be combined in the manner set forth in the rejection.
13. Applicant submits that “the combination as proposed in the Office Action is not proper. Unless hindsight is used, there is nothing in Butler that would reasonably motivate one skilled in the art to look to Noble to employ Noble's particular parameters in detecting observation overlap, let alone using the detected observation overlap in combination with the remaining limitations of claim 1. There is simply no motivation to do so.” (Response at p. 8).
Examiner’s Response:
Applicant’s argument has been fully considered but is not persuasive.
In response to applicant’s argument that the examiner’s conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill as of the effective filing date of the Applicant’s invention, and does not include knowledge gleaned only from the applicant’s disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).
Examiner respectfully submits that the prior art of Butler and Noble is knowledge which was within the level of ordinary skill as of the effective filing date of the Applicant’s invention, and does not include knowledge gleaned only from the Applicant’s disclosure. As a result, such a reconstruction is proper.
Moreover, the rejections set out in the Final Action clearly set forth which claim limitations are taught by each of the prior art references, and the reason why it would be obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant's invention to combine their teachings, and Applicant has not explained why the cited prior art references cannot be combined in the manner set forth in the rejection.
14. Also, Applicant submits, in relation to Walters, “that ‘correcting’ and ‘updating’ of a model are not teachings of proportionally reducing confidence in relation to drift, nor are ‘model training’ and ‘hyperparameter tuning’. In fact, no proportional reduction is even disclosed or suggested. In other words, there is nothing in the above cited portion of Walters that clearly and affirmatively teaches proportional reduction of confidence in relation to drift.” (Response at p. 8 (emphasis added by Applicant)).
Examiner’s Response:
Examiner respectfully disagrees because “proportionally” reducing confidence in relation to drift has a broadest reasonable interpretation that covers the “correction” of a model as set out by Walters.
1. An Information Handling System (IHS), comprising:
* * *
modify the confidence based upon drift detected during operation of an AI/ML model used to make the AI/ML model inference;
in response to a determination that the drift is greater than a threshold value, reduce the confidence proportionally to the drift; and
* * *
(claim 1, lines 1 & 12-15 (emphasis added by Examiner)).
The plain meaning of “reduce the confidence proportionally” is to reduce it by an amount corresponding in size, degree, or intensity to the measured “drift,” so as to maintain the accuracy and reliability of measurements over time. Also, the plain meaning of “drift” pertains to effects impacting measurements, such as instrument aging, environmental conditions, calibration issues, etc. Thus, by adjusting “proportionally” to “drift,” one can ensure that measured values remain consistent and accurate, even when faced with changes in the measurement system. As set out above:
Walters ¶ 0185 teaches “the model may be corrected (updated) based on detected drift. Correcting the model may include model training and/or hyperparameter tuning, consistent with disclosed embodiments. Correcting the model may be involve model training or hyperparameter tuning using the received event data and/or other data [(that is, reducing the confidence proportionally to the drift)].”
Accordingly, the broadest reasonable interpretation of the claim term of “reduce the confidence proportionally” covers the teaching of Walters in relation to model “correction” for consistency and accuracy purposes, which is not inconsistent with the Applicant’s disclosure. (MPEP § 2111).
Also, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. Where a rejection of a claim is based on two or more references, a reply that is limited to what a subset of the applied references teaches or fails to teach, or that fails to address the combined teaching of the applied references may be considered to be an argument that attacks the reference(s) individually, as is the case here with the cited prior art of Walters. (MPEP § 2145.IV).
Moreover, the rejections set out in the Final Action clearly set forth which claim limitations are taught by each of the prior art references, and the reason why it would be obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant's invention to combine their teachings, and Applicant has not explained why the cited prior art references cannot be combined in the manner set forth in the rejection.
Conclusion
15. The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure:
(US Published Application 20210407131 to Kallakuri et al.) teaches recalibrating cameras in a real space for tracking puts and takes of items by subjects.
(US Published Application 20190147220 et al.) teaches detecting objects in video data, comprising: determining object-label probability values for spatial elements of frames of video data using a two-dimensional image classifier; identifying surface elements in a three-dimensional surface element representation of a space observed in the frames of video data that correspond to the spatial elements; and updating object-label probability values for the surface elements based on the object-label probability values for corresponding spatial elements to provide a semantically-labelled three-dimensional surface element representation of objects present in the video data.
(Saxena et al., “Multiagent Sensor Fusion for Connected & Autonomous Vehicles to Enhance Navigation Safety,” IEEE (2019)) teaches a proposed statistical algorithm to output a measure of similarity between local and world interpretations and identifies false negatives (if any) for the local agent. This measure, in turn, can be used to inform the agents to update their kinematic behavior in order to account for any errors in local interpretation.
(Rangesh et al., “Tracking for Autonomous Vehicles using Cameras & LiDARs,” arXiv (2019)) teaches a generalization of the MDP framework for multi-object tracking, with some key extensions: first, objects are tracked across multiple cameras and across different sensor modalities, by fusing object proposals across sensors accurately and efficiently; second, the objects of interest (targets) are tracked directly in the real world.
(de Silva et al., “Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots,” arXiv (2018)) teaches the problem of LiDAR and imaging data fusion can be approached as a camera pose estimation problem, where the relationship between 3D LIDAR coordinates and 2D image coordinates is characterised by camera parameters such as position, orientation, and focal length. Proposed is an information-theoretic similarity measure to automatically register 2D-Optical imagery with 3D LiDAR scans by searching for a suitable camera transformation matrix. LiDAR and optical image fusion is used in [30] for creating 3D virtual reality models of urban scenes.
16. Any inquiry concerning this communication or earlier communications from the Examiner should be directed to KEVIN L. SMITH whose telephone number is (571) 272-5964. Normally, the Examiner is available on Monday-Thursday 0730-1730.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, KAKALI CHAKI can be reached on 571-272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.L.S./
Examiner, Art Unit 2122
/KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122