Prosecution Insights
Last updated: April 19, 2026
Application No. 17/237,330

METHOD FOR HYBRID MACHINE LEARNING FOR SHRINK PREVENTION SYSTEM

Final Rejection (§101, §103)
Filed: Apr 22, 2021
Examiner: WALTON, CHESIREE A
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Sensormatic Electronics LLC
OA Round: 8 (Final)
Grant Probability: 30% (At Risk)
OA Rounds: 9-10
To Grant: 3y 5m
With Interview: 58%

Examiner Intelligence

Grants only 30% of cases.
Career Allow Rate: 30% (63 granted / 211 resolved; -22.1% vs TC avg)
Interview Lift: +28.6% among resolved cases with interview
Avg Prosecution: 3y 5m (52 currently pending)
Total Applications: 263 (across all art units)

Statute-Specific Performance

§101: 38.8% (-1.2% vs TC avg)
§103: 48.9% (+8.9% vs TC avg)
§102: 4.7% (-35.3% vs TC avg)
§112: 5.6% (-34.4% vs TC avg)
Based on career data from 211 resolved cases; Tech Center averages are estimates.

Office Action

§101, §103
Detailed Action

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicant

The following is a Final Office action on Application Serial Number 17/237,330, filed on April 22, 2021. In response to Examiner's Office Action of September 26, 2025, Applicant, on December 24, 2025, amended no claims. Claims 1-4, 6-10, 12-16, and 18-21 are pending in this application and have been rejected below.

Response to Arguments

Applicant's arguments filed December 24, 2025 have been fully considered but they are not persuasive and/or are moot in view of the revised rejections. Applicant's arguments will be addressed below in the order in which they appear in the response filed December 24, 2025.

On Pg. 9-10 of the Remarks, with respect to the claim rejection(s) under 35 U.S.C. § 101, Applicant states: "The Office Action's characterization of the claims skips the actual claim recitations and their technical import. The claimed pipeline does not tell a retailer how to price or sell or 'organize' people; rather, the claimed pipeline builds and evaluates a particular machine learning model architecture with an ordered chaining constraint and an objective function (lower margins of error compared to any single machine learning algorithm). Courts have repeatedly held claims directed to specific data structures or processing architectures are not abstract 'organizing human activity,' e.g., Enfish (self-referential table) and McRO (specific rules improving automated animation). See, e.g., MPEP § 2106.06(b). Here, the claims likewise recite a specific model-building architecture and evaluation regime. Absent a proper element-by-element mapping to the 'organizing human activities' category, the Step 2A Prong One classification fails and should be withdrawn."

In response, Examiner respectfully disagrees.
The aforementioned procedures are not improvements to a problem in the software arts, a technology, or a technological field. The shrink prevention analytics is a judicial exception (i.e., an abstract idea). The claimed invention is executed by computer components performing computer functions. Unlike Enfish and McRO, in which the claims asserted improvements to the configuration of computer memory and made improvements in computer-related technology, the present claims recite the additional element of using computer components to perform each step. The "database", "memory", "processor", and "computer readable medium" are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a computer component. The general use of a machine learning technique does not provide a meaningful limitation to transform the abstract idea into a practical application. Therefore, currently, the machine learning is solely used as a tool to perform the instructions of the abstract idea. Examiner asserts that, regardless of the complexity of the data analysis and/or processing, without recitation of improvements to the functioning of the technology, the technological field, and/or computer-related technology (i.e., software), the steps outlined in the claimed invention amount to no more than mere instructions to implement the idea on a general purpose computer. Applicant has not identified anything in the claimed invention that shows or even submits that the technology is being improved or that there was a problem in the technology that the claimed invention solves.

On Pg. 10 of the Remarks, with respect to the claim rejection(s) under 35 U.S.C. § 101, Applicant states that the claims integrate it into a practical application under Step 2A Prong Two by improving a technical field: machine learning model construction and evaluation.
The pipeline imposes meaningful constraints and yields a concrete, measurable improvement (lower margin of error than any single algorithm). Indicators of practical application per MPEP § 2106.04(d)(1) include "an improvement in the functioning of a computer, or an improvement to other technology or technical field." The claimed architecture improves predictive model performance, i.e., the technology of model orchestration.

In response, the claim limitations of formatting the extracted data and subdividing the data into a training dataset and a testing dataset are included in the abstract idea of "Methods of Organizing Human Activities" (fundamental mitigative economic principles). The train/test limitations do not provide technological details as to how the machine learning is performed, other than a mere recitation of being iterative (feeding outputs back to inputs) and a general recitation of hybrid machine learning that allegedly "provides a lower margin of error" without any technological or computational details of how such lower margin of error is achieved. Thus, when tested per MPEP 2106.05(f)(1), such additional elements fail to provide an actual technological solution to integrate the abstract idea into a practical application (Step 2A Prong Two) or to provide significantly more (Step 2B). Rather, such additional elements, when tested per MPEP 2106.05(f)(2)(i), merely apply the underlying abstract business method and its underlying mathematical algorithm on a general purpose computer, which also fails to provide an actual technological solution to integrate the abstract idea into a practical application (Step 2A Prong Two) or to provide significantly more (Step 2B). Specifically, the recited algorithms, e.g., "linear regression, logistic regression, decision tree, random forest, dimensionality reduction algorithms, or gradient boosting algorithms", i.e., the statistical and other algorithms, are considered mathematical concepts.

On Pg. 11-12 of the Remarks, with respect to the claim rejection(s) under 35 U.S.C. § 103, Applicant argues that the prior art does not disclose: "wherein the testing of combinations of the plurality of machine learning algorithms includes feeding outputs of a first machine learning algorithm as inputs to a second machine learning algorithm in a particular order." More specifically, Applicant argues the asserted paragraph does not disclose or suggest testing combinations of a plurality of machine learning algorithms by feeding outputs of a first machine learning algorithm as inputs to a second machine learning algorithm in a particular order; and "select two or more machine learning algorithms from the plurality of machine learning algorithms to form a hybrid machine learning model, wherein the hybrid machine learning model feeds the output of a first selected machine learning algorithm as inputs to a second selected machine learning algorithm in a particular order and the hybrid machine learning model provides a lower margin of error than the margin of error achieved from any one of the plurality of machine learning algorithms individually."

In response, the combination of the prior art discloses these elements, as set out in the §103 analysis below. Lei is relied upon for the feeding of machine learning models. Par. 65: "FIG. 2, embodiments use the optimized feature sets [Par. 19] as input to forecasting algorithms to generate forecasting models."

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4, 6-10, 12-16, and 18-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1-4, 6, and 19-21 are directed to an apparatus for performing analytics using machine learning for shrink prevention. Claims 7-10 and 12 are directed to a method for performing analytics using machine learning for shrink prevention. Claims 13-16 and 18 are directed to an article of manufacture for performing analytics using machine learning for shrink prevention.

Claim 1 recites an apparatus for performing analytics using machine learning for shrink prevention, Claim 7 recites a method for performing analytics using machine learning for shrink prevention, and Claim 13 recites an article of manufacture for performing analytics using machine learning for shrink prevention, which include: extracting a dataset that comprises one or more of inventory information, traffic information, or shrink information associated with a retailer; formatting the dataset that is extracted, wherein a portion of the formatted dataset is subdivided into a training dataset and a testing dataset; generating one or more shrink features from the training dataset by identifying attributes within the training dataset that are associated with retail theft; and storing shrink predictions generated from the hybrid machine learning model. As drafted, this is, under its broadest reasonable interpretation, within the abstract idea grouping of "Methods of Organizing Human Activities": fundamental economic principles or practices and commercial or legal interactions (very specifically advertising, marketing or sales activities ... business relations). The recitation of "database", "memory", "processor", and "computer readable medium" provides nothing in the claim elements to preclude the steps from being Methods of Organizing Human Activities. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. The claims primarily recite the additional element of using computer components to perform each step.
The "database", "memory", "processor", and "computer readable medium" are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a computer component. See MPEP 2106.05(f).

Regarding the additional element of machine learning: testing combinations of a plurality of machine learning algorithms based on the one or more shrink features such that each combination of the plurality of machine learning algorithms outputs a predictive result associated with the retail theft, wherein the testing of combinations of the plurality of machine learning algorithms comprises feeding outputs of a first machine learning algorithm as inputs to a second machine learning algorithm in a particular order; and selecting two or more machine learning algorithms from the plurality of machine learning algorithms to form a hybrid machine learning model, wherein the hybrid machine learning model feeds the output of a first selected machine learning algorithm as inputs to a second selected machine learning algorithm in a particular order and the hybrid machine learning model provides a lower margin of error than the margin of error achieved from any one of the plurality of machine learning algorithms individually. The specification discloses the machine learning at a high level of generality, providing examples of different techniques that may be applied. The general use of a machine learning technique does not provide a meaningful limitation to transform the abstract idea into a practical application. Therefore, currently, the machine learning is solely used as a tool to perform the instructions of the abstract idea. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
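For orientation, the chaining limitation the parties dispute (outputs of a first algorithm fed as inputs to a second, in a particular order, with the hybrid yielding a lower margin of error than either algorithm alone) can be sketched in a few lines. This is an editor's illustration of the general stacked/boosted-chain pattern, not code from the application; the synthetic data, the bin count, and the choice of a linear first stage and piecewise-constant second stage are all assumptions.

```python
import numpy as np

# Synthetic signal (illustrative only): a linear trend plus a periodic component.
x = np.linspace(0.0, 10.0, 500)
y = 2.0 * x + np.sin(3.0 * x)

# First algorithm: ordinary least-squares linear regression on x.
A = np.column_stack([x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
p1 = A @ coef                      # outputs of the first algorithm

# Second algorithm, fed the first algorithm's outputs: it fits the
# leftover error y - p1 with a piecewise-constant model over bins of x.
bins = np.linspace(x.min(), x.max(), 21)
idx = np.clip(np.digitize(x, bins) - 1, 0, 19)
correction = np.array([(y - p1)[idx == b].mean() for b in range(20)])
hybrid = p1 + correction[idx]      # chained (hybrid) prediction

def mae(t, p):   # mean absolute error, one claimed margin-of-error metric
    return float(np.mean(np.abs(t - p)))

def rmse(t, p):  # root mean square error, the other claimed metric
    return float(np.sqrt(np.mean((t - p) ** 2)))
```

Because the second stage corrects the first stage's residual, the chained prediction's MAE and RMSE come out below the linear model's alone, which is the "lower margin of error" property the claim recites.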
The claims also fail to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, and/or an additional element that applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. See 84 Fed. Reg. 55. In particular, there is a lack of improvement to a computer or technical field in data analytics.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of "database", "memory", "processor", and "computer readable medium" are insufficient to amount to significantly more. (See MPEP 2106.05(f) - Mere Instructions to Apply an Exception - "Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible." Alice Corp., 134 S. Ct. at 235.) Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
The claim fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. Viewed individually or as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent eligible application of the abstract idea such that the claim(s) amount to significantly more than the abstract idea itself.

With regard to extracting and testing data and Step 2B, this is MPEP 2106.05(d): receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information), and storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015). Regarding the additional element of machine learning and Step 2B: the specification discloses the machine learning at a high level of generality, providing examples of different techniques (linear regression, logistic regression, decision tree, random forest, dimensionality reduction algorithms, or gradient boosting algorithms) that may be applied. The general use of a machine learning technique does not provide a meaningful limitation to transform the abstract idea into a practical application. Therefore, currently, the machine learning is solely used as a tool to perform the instructions of the abstract idea.
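The "subdivided into a training dataset and testing dataset" limitation discussed above describes the conventional holdout split that the rejection characterizes as generic data handling. A minimal sketch of that pattern follows; it is an editor's illustration, with the synthetic data and the 80/20 split ratio as assumptions rather than anything taken from the application.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "formatted dataset": one feature column x, one target column y.
x = rng.uniform(0.0, 1.0, 200)
y = 3.0 * x + rng.normal(0.0, 0.1, 200)

# Subdivide a portion into a training dataset and a testing dataset (80/20).
order = rng.permutation(200)
train, test = order[:160], order[160:]

# Train a single algorithm (a least-squares line) on the training portion.
A = np.column_stack([x[train], np.ones(160)])
coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)

# Measure its margin of error on the held-out testing portion.
pred = coef[0] * x[test] + coef[1]
test_mae = float(np.mean(np.abs(y[test] - pred)))
```

The held-out error is the quantity the claims compare across candidate algorithms; here it is simply reported for one model.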
Examiner concludes that the additional elements in combination fail to amount to significantly more than the abstract idea based on findings that each element merely performs the same function(s) in combination as each element performs separately. The claim is not patent eligible. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually.

Dependent Claims 2-4, 6, 8-10, 12, 14-17, and 18-21 recite: process the dataset in order to expand granularity of information associated with the one or more of inventory information, the traffic information, or the shrink information for the retailer included in the one or more databases; identify data points within the dataset that identify one or more types of items that the retailer has identified as high priority items; allocate weights to each of the one or more types of items based on input from the retailer; determine a pattern during a time period that directly correlates against an increase in the retail theft for the time period; determine the margin of error that is achieved from the plurality of machine learning algorithms against the testing dataset that reflects the actual shrink for a time period, wherein the margin of error comprises one or both of mean absolute error or root mean square error for the time period; modify at least one of the two or more machine learning algorithms that are selected for the hybrid machine learning model; select two or more machine learning algorithms from the plurality of machine learning algorithms to form a hybrid machine learning model comprising an order to apply the two or more selected machine learning algorithms; wherein the one or more shrink databases include weather information; and wherein the shrink predictions generated from the hybrid machine learning model include predictions of risk factors and likelihood of an item being subject to retail theft for any particular day or time; further narrowing the abstract idea. These recited limitations in the dependent claims do not amount to significantly more than the above-identified judicial exceptions in Claims 1, 7, and 13.

Regarding Claims 2, 8, 14, and 20 and the additional element of "database": it is storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015). Regarding Claims 4, 6, 10, 12, 16, 19, and 21 and the additional element of machine learning: the specification discloses the machine learning at a high level of generality, providing examples of different techniques that may be applied. The general use of a machine learning technique does not provide a meaningful limitation to transform the abstract idea into a practical application. Therefore, currently, the machine learning is solely used as a tool to perform the instructions of the abstract idea.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4, 6-8, 10, 12-14, 16, and 18-21 are rejected under 35 U.S.C. 103 as being unpatentable over Lobo, US Publication No. 20190027003A1 [hereinafter Lobo], in view of Austin et al., US Publication No. 20210174130A1 [hereinafter Austin], and further in view of Lei et al., US Publication No. 20190188536A1 [hereinafter Lei].

Regarding Claim 1, Lobo teaches: An apparatus for performing data analytics using machine learning, the apparatus comprising: a memory configured to store instructions; and a processor communicatively coupled with the memory, the processor configured to execute the instructions to: extract a dataset from one or more shrink databases stored in the memory, wherein the one or more shrink databases comprise one or more of inventory information, traffic information, or shrink information associated with a retailer (Lobo, Par. 5-6: "In an embodiment, a retail shrinkage activity prediction and identification system includes: a sensor control system, a first shrinkage database, a second shrinkage database, an analytics engine, and a machine learning engine. The sensor control system is communicatively coupled with a plurality of sensors arranged in a retail environment. The sensor control system is configured to control a setting of each of the plurality of sensors.
The first shrinkage database includes retail shrinkage data for at least the retail environment. The retail shrinkage data includes at least one item at high risk for shrinkage or at least one time at high risk for shrinkage activity. The second shrinkage database includes external data related to shrinkage in a geographic area of the retail environment. The analytics engine is communicatively coupled with: the first shrinkage database to access the retail shrinkage data, the second shrinkage database to access the external data, and the sensor control system to receive real-time sensor data from the plurality of sensors. The analytics engine is configured to compare the real-time sensor data with the external data to identify a high shrinkage risk situation. If a high shrinkage risk situation is identified, the analytics engine will: issue an alert, cause the sensor control system to alter the setting of at least one of the plurality of sensors, and update at least one of the first shrinkage database or the second shrinkage database. The machine learning engine is communicatively coupled with the first shrinkage database, the second shrinkage database, and the analytics engine to use the retail shrinkage data, the external data, and the issuance of an alert to conduct predictive modeling and cause the analytics engine to issue an alert if the predictive modeling determines that a high shrinkage risk situation is likely to occur.”; Par. 43-44); format the dataset that is extracted from the one or more shrink databases, wherein a portion of the formatted dataset is subdivided into a training dataset and testing dataset (Lobo Par. 26-“ External data 132 can additionally or alternatively include data or information shared among retailers or business associations in particular industries and/or geographic areas. Any outside public, private, or government database which provides information potentially relevant to shrinkage can be used. 
In some embodiments, external data 132 is provided, selected, filtered, and/or applied according to a geographic area of relevance to a particular retailer, store, operating area, or other characteristic.”[ filtering equates to dividing the data]; Par. 6; Par. 42); generate one or more shrink features from the training dataset by identifying attributes within the training dataset that are associated with retail theft (Lobo Par. 8-“ In an embodiment, a method of predicting or identifying retail shrinkage activity includes: accessing retail shrinkage data comprising at least one item at high risk for shrinkage or at least one time at high risk for shrinkage activity in a retail environment; accessing external data related to shrinkage in a geographic area of the retail environment; receiving real-time sensor data from a plurality of sensors arranged in the retail environment; comparing the real-time sensor data with the external data to identify a high shrinkage risk situation and if a high shrinkage risk situation is identified, issuing an alert, causing a sensor control system to alter a setting of at least one of the plurality of sensors, and updating at least one of the retail shrinkage data or the external data; conducting predictive modeling using the retail shrinkage data, the external data, and the issuance of an alert; and issuing an alert if the predictive modeling determines that a high shrinkage risk situation is likely to occur. “Par. 24-“ Sensors 104 can include a plurality of sensors. The plurality of sensors 104 can include any of a surveillance camera, an optical sensor, a motion detection sensor, a temperature sensor, an infrared sensor, a microphone, or a pressure sensor, for example. Settings 106 of each of the sensors 104 can include an activation, a direction, an angle, a zoom level, a location or a sensing area, for example. Real-time sensor data 108 can include image data, such as an image of clothing or facial features. 
Real-time sensor data 108 can also include data related to movements of individuals or groups, congregating of individuals, temperature profile data, infrared data, sound recording data, pressure data, time of purchase data, length of trip data, or other potentially relevant tracked information. “Par. 42-“ This information is fed into the machine learning engine 550 for training at 529 and predictions are made by the machine learning engine 550 at 531.”); ... shrink features... associated with retail theft (Lobo Par. 3; Par. 8-“ In an embodiment, a method of predicting or identifying retail shrinkage activity includes: accessing retail shrinkage data comprising at least one item at high risk for shrinkage or at least one time at high risk for shrinkage activity in a retail environment”); ... shrink prediction... (Lobo Par. 5-6-“ In an embodiment, a retail shrinkage activity prediction and identification system includes: a sensor control system, a first shrinkage database, a second shrinkage database, an analytics engine, and a machine learning engine. The sensor control system is communicatively coupled with a plurality of sensors arranged in a retail environment. The sensor control system is configured to control a setting of each of the plurality of sensors. The first shrinkage database includes retail shrinkage data for at least the retail environment. The retail shrinkage data includes at least one item at high risk for shrinkage or at least one time at high risk for shrinkage activity.”) Lobo teaches shrink modelling utilizing machine learning and the machine learning modelling is improved upon by Austin: train a plurality of machine learning algorithms using the training dataset (Austin Par. 26-31- The system 200 and method implement automatic machine learning processes which selectively utilize various components of different crowd-submitted solutions for optimally solving machine learning problems. 
The approach exploits the following components (of which 5 components are listed here, but other components could also be possible) for improving the prediction of machine learning algorithms: 1) utilizing additional data to help in the prediction, 2) designing more predictive derived data, also referred to as features or a data pipeline, from existing data, 3) utilizing different or more predictive algorithms, and/or 4) optimizing the parameters in a given algorithm, and/or 5) utilizing different techniques for validating the model performance (e.g. different cross-validation approaches such as form a predictive model using the full training set, or form the model by breaking the training set into sub-training sets, and forming individual models which are then "combined" to an overall model)."); test combinations of plurality of machine learning algorithms based on the one or more ...features such that each combination outputs a predictive result... (Austin, Par. 52: "The hybrid ML system 204 includes a ranking process 208, a select hybrid components process 210, training data 212 and test data 214. In the illustrated embodiment, the hybrid ML system 204 includes a leaderboard 216 and an application programming interface (API) 218. The hybrid ML system 204 operates to develop one or more ML solutions to ML problems such as business problem 220. The hybrid ML system 204 in exemplary embodiments is implemented using a processing system including at least one processor and a memory storing instructions to control operations of the processing system. The hybrid ML system 204 may receive information about the business problem 220 including the dataset and scoring method from the user 202 and develop an initial ML solution from these inputs. In alternative embodiments, the hybrid ML system 204 may receive the initial ML solution along with the information about the business problem 220 and the dataset and the scoring method from the user 202."); and select two or more machine learning algorithms from the plurality of machine learning algorithms to form a hybrid machine learning model, wherein the hybrid machine learning model …provides a lower margin of error than the margin of error achieved from any one of the plurality of machine learning algorithms individually (Austin, Par. 54-57: "In some embodiments, the ranking process 208 generates a ranked list of ML models received by or developed by the hybrid ML system 204. The ranked list of ML models may be displayed on leaderboard 216. FIGS. 2B shows examples of leaderboards 232, 234 for a particular machine learning problem. The select hybrid components process 210 receives other ML solutions from the other users 206 and selects components of the other ML solutions to identify a best ML solution. The select hybrid components process 210 provides training data 212 to train each respective ML solution to train the ML solution. The select hybrid components process 210 provides test data 214 to each respective ML solution to test the ML solution. In exemplary embodiments, training data 212 includes all information, including a target value to predict. In some exemplary embodiments, the test data 214 omits the target value. For example, in a sales forecasting problem, the competing models may be given as training data sales data for January and February of a given year but not for March of the year. The test data includes March data and the model is evaluated by the ranking process 208 by its accuracy for the prediction for March sales. The accuracy of predicting the target value using the test data 214 when processing the test data is the basis for scoring the model. A more accurate prediction results in a lower log loss score. The hybrid ML system 204 addresses some of the challenges in creating the best machine learning by openly exposing machine learning problems to a cross discipline crowd of experts represented by other users 206.
The hybrid ML system 204 enables collecting from the other users 206 individual proposed solutions such as including or consisting of datasets 222, feature sets 224, machine learning algorithms 226, parameter sets 228 and other available information 230. Each proposed solution received from the other users 206 is individually ranked by the ranking process 208 and reported on the leaderboard 216. The hybrid ML”); and store, in the memory, ... predictions generated from the hybrid machine learning model. (Austin Par.98-99-“ In the competition environment illustrated in FIG. 2D, the judge 272 plays events according to their timestamp information. Events are retrieved from a data store and, according to the time stamp, are presented to the competitor 274. When a score triggering event occurs, such as a retail point-of-sale card swipe in stream 280, the judge 272 pauses all data feeds and waits a specified time, such as 250 ms, to allow for the competitor 274 to provide its score. Once the score of the competitor 274 has been received, the judge 272 continues to replay events until the next triggering event occurs and the process repeats. If the competitor process is unable to provide its score within the time limit, the judge 272 marks the score as timed out and penalizes the competitor 274. In this way, it is impossible for the competitor 274 to leak future data into their risk score and cheat by having access to information that would have not been available in a real-world situation.”) Lobo and Austin are directed to retail machine learning modelling. Austin improves upon the machine learning techniques. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have improve upon machine modelling analysis of Lobo, as taught by Austin, by utilizing hybrid modelling techniques with a reasonable expectation of success of arriving at the claimed invention. 
One of ordinary skill in the art would have been motivated to make the modification to the teachings of Lobo with the motivation of improving the prediction of machine learning algorithms (Austin Par. 26). Lobo in view of Austin fail to teach the following feature taught by Lei: wherein the testing of combinations of the plurality of machine learning algorithms comprises feeding outputs of a first machine learning algorithm as inputs to a second machine learning algorithm in a particular order (Lei Par. 78-“ As disclosed above, the output of the functionality of FIG. 2 can be used as input to the functionality of FIG. 5 to generate a demand forecast in one embodiment. For example, in one embodiment of FIG. 5 three algorithms are considered (e.g., linear regression, SVM, ANNs), and assume FIG. 2 generates three optimized feature sets.”; Par. 62-“ In one embodiment, multiple rounds of the functionality of FIG. 2 are executed in order to produce multiple optimized feature sets. Each feature set can be used as input into a forecasting algorithm to generate forecasting trained models. The multiple trained models can then be aggregated to generate a demand forecast, as disclosed in detail below in conjunction with FIG. 5. The output of the functionality of FIG. 2 is one or more optimized feature sets.”; Par. 65-“ In embodiments disclosed above, where one or more optimized feature sets are generated using the functionality of FIG. 2, embodiments use the optimized feature sets as input to forecasting algorithms to generate forecasting models. FIG. 5 is a flow diagram of the functionality of promotion effects module 16 of FIG. 1 when determining promotion effects at an aggregate level using multiple trained models in accordance with one embodiment. The multiple models can be generated using the functionality of FIG. 2.; Par. 72-73; Abstract). 
hybrid machine learning model…feeds the output of a first selected machine learning algorithm as inputs to a second selected machine learning algorithm in a particular order…(Lei Par. 78-“ As disclosed above, the output of the functionality of FIG. 2 can be used as input to the functionality of FIG. 5 to generate a demand forecast in one embodiment. For example, in one embodiment of FIG. 5 three algorithms are considered (e.g., linear regression, SVM, ANNs), and assume FIG. 2 generates three optimized feature sets.”; Par. 19; Par. 62-“ In one embodiment, multiple rounds of the functionality of FIG. 2 are executed in order to produce multiple optimized feature sets. Each feature set can be used as input into a forecasting algorithm to generate forecasting trained models. The multiple trained models can then be aggregated to generate a demand forecast, as disclosed in detail below in conjunction with FIG. 5. The output of the functionality of FIG. 2 is one or more optimized feature sets.”; Par. 65-“ In embodiments disclosed above, where one or more optimized feature sets are generated using the functionality of FIG. 2, embodiments use the optimized feature sets as input to forecasting algorithms to generate forecasting models. FIG. 5 is a flow diagram of the functionality of promotion effects module 16 of FIG. 1 when determining promotion effects at an aggregate level using multiple trained models in accordance with one embodiment. The multiple models can be generated using the functionality of FIG. 2.; Par. 72-73; Abstract) Lobo, Austin and Lei are directed to machine learning modelling. Austin and Lei improve upon the machine learning techniques. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the modelling analysis of Lobo in view of Austin, as taught by Lei, by utilizing modelling techniques with a reasonable expectation of success of arriving at the claimed invention. 
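The chaining limitation attributed to Lei (the output of a first selected algorithm fed as the input to a second selected algorithm, in a particular order) corresponds in substance to sequential model stacking. A minimal sketch, using two hypothetical closed-form stages standing in for the regression/SVM/ANN stages Lei names:

```python
# Minimal sketch of ordered chaining (stacking): the first stage's outputs
# become the second stage's training inputs. Both stages are hypothetical
# one-dimensional fits, for illustration only.

def fit_scale(xs, ys):
    """Stage 1: least-squares scale factor, y ≈ k * x (no intercept)."""
    k = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return lambda x: k * x

def fit_offset(zs, ys):
    """Stage 2: constant offset correcting stage 1's average residual."""
    b = sum(y - z for z, y in zip(zs, ys)) / len(zs)
    return lambda z: z + b

xs = [1.0, 2.0, 3.0]
ys = [2.5, 4.5, 6.5]                 # roughly y = 2x + 0.5

stage1 = fit_scale(xs, ys)
stage1_out = [stage1(x) for x in xs]  # first algorithm's outputs...
stage2 = fit_offset(stage1_out, ys)   # ...are the second algorithm's inputs

def hybrid(x):
    """Chained model: stage 1 feeds stage 2, in that fixed order."""
    return stage2(stage1(x))
```

The order matters: reversing the two stages produces a different composite model, which is why the claim recites a particular order.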
One of ordinary skill in the art would have been motivated to make the modification to the teachings of Lobo in view of Austin with the motivation of improving forecasting (Lei Par. 30). Regarding Claim 2, Claim 8 and Claim 14, Lobo in view of Austin in further view of Lei teach The apparatus of claim 1, wherein the instructions to format the dataset that is extracted from the one or more shrink databases further comprises instructions for:..., The method of claim 7, wherein formatting the dataset that is extracted from the one or more shrink databases further comprises:.... and The non-transitory computer readable medium of claim 13, wherein the code for formatting the dataset that is extracted from the one or more shrink databases further comprises code for:... process the dataset in order to expand granularity of information associated with the one or more of inventory information, the traffic information, or the shrink information for the retailer included in the one or more databases (Lobo Par. 25-27-“ First shrinkage database 120 includes retail shrinkage data 122 for the retail environment 102. The retail shrinkage data 122 can include one or more items 124 at high risk for shrinkage. These may be items that have a history of being stolen frequently, are particularly valuable, or are known to be related to frequent shrinkage-related issues. Items that are small, easy to conceal, or difficult to track could also be deemed items 124 at high risk for shrinkage. The retail shrinkage data 122 can include one or more times 126 at high risk for shrinkage activity. These times 126 can include times of day when shrinkage is most common, times of the week common for shrinkage, times of the year common for shrinkage, or times of expected shrinkage related to holidays and local activities. Further, certain items can be correlated to certain times to identify high shrinkage risk. 
In some embodiments, at least one item 124 at high risk or at least one time 126 at high risk for shrinkage is part of the retail shrinkage data 122. Second shrinkage database 130 contains external data 132 related to shrinkage in a geographic area of a retail environment 102. External data 132 can include publicly available data, criminal report data and/or public safety notice data, for example. External data 132 can additionally or alternatively include data or information shared among retailers or business associations in particular industries and/or geographic areas. Any outside public, private, or government database which provides information potentially relevant to shrinkage can be used. In some embodiments, external data 132 is provided, selected, filtered, and/or applied according to a geographic area of relevance to a particular retailer, store, operating area, or other characteristic.”); Lobo in view of Austin fail to teach the following feature taught by Lei: identify data points within the dataset that identify one or more of types of items that the retailer has identified as high priority items; and allocate weights to each of the one or more types of items based on input from the retailer (Lei Par. 73-74-“ Training a model using a machine learning algorithm, in general, is a way to describe how the output of the model will be calculated based on the input feature set. For example, for a linear regression model, the forecast can be modeled as follows: forecast=base demand*seasonality*promotion 1*promotion 2*promotion effect 10. For different training methods, the output will be different. For example: (1) for linear regression, the training will produce the estimations for seasonality, promotion effect 1 . . . 
promotion effect 10; (2) for the SVM, the training will produce the “support vector” which is the set of the input data points associated with some weight; (3) for the ANN, the training output will be the final activation function and corresponding weight for each nodes.”). Lobo, Austin and Lei are directed to machine learning modelling. Austin and Lei improve upon the machine learning techniques. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the modelling analysis of Lobo in view of Austin, as taught by Lei, by utilizing modelling techniques with a reasonable expectation of success of arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make the modification to the teachings of Lobo in view of Austin with the motivation of improving forecasting (Lei Par. 30). Regarding Claim 4, Claim 10 and Claim 16, Lobo in view of Austin in further view of Lei teach The apparatus of claim 1, wherein the instructions to test the combination of the plurality of machine learning algorithms based on the one or more shrink features further comprises instructions to:..., The method of claim 7, wherein testing the combination of the plurality of machine learning algorithms based on the one or more shrink features further comprises:.... and The non-transitory computer readable medium of claim 13, wherein the code for testing the combination of the plurality of machine learning algorithms based on the one or more shrink features further comprises code for:... 
Lobo in view of Austin fail to teach the following feature taught by Lei: determine the margin of error that is achieved from the combinations of plurality of machine learning algorithms against the testing dataset that reflects the actual shrink for a time period, wherein the margin of error comprises one or both of mean absolute error or root mean square error for the time period (Lei Par. 74-“ At 508, each model is validated and errors are determined using the test set. For each model M(i), embodiments apply the test set T(i) to predict the results and calculate the root-mean-square error RMSE(i). For example, for a test data set i, in which there are 10 data points x1, . . . x10, embodiments predict the output of these 10 points based on the trained model. If the output is P1, . . . P10, then the RMSE is calculated as follows: rmse = √( ( Σ_{i=1}^{10} ( xi − pi )² ) / 10 )”). Lobo, Austin and Lei are directed to machine learning modelling. Austin and Lei improve upon the machine learning techniques. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the modelling analysis of Lobo in view of Austin, as taught by Lei, by utilizing modelling techniques with a reasonable expectation of success of arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make the modification to the teachings of Lobo in view of Austin with the motivation of improving forecasting (Lei Par. 30). Regarding Claim 5, Claim 11 and Claim 17: Cancelled. 
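The two claimed error metrics follow their standard definitions, and Lei's quoted RMSE example is a direct instance of them. A short sketch with illustrative values (not taken from the references):

```python
import math

def mean_absolute_error(actual, predicted):
    """MAE: average absolute deviation over the time period."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def root_mean_square_error(actual, predicted):
    """RMSE, as in Lei: square root of the mean squared deviation."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

actual_shrink = [10.0, 12.0, 9.0, 11.0]   # hypothetical observed shrink
predicted     = [11.0, 12.0, 8.0, 13.0]   # hypothetical model output

mae = mean_absolute_error(actual_shrink, predicted)     # (1+0+1+2)/4 = 1.0
rmse = root_mean_square_error(actual_shrink, predicted)
```

Because RMSE squares each deviation before averaging, it penalizes large misses more heavily than MAE, which is why the claim allows either or both.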
Regarding Claim 6, Claim 12 and Claim 18, Lobo in view of Austin in further view of Lei teach The apparatus of claim 1, wherein the instructions to select the two or more machine learning algorithms from the plurality of machine learning algorithms to form the hybrid machine learning model further include instructions to:..., The method of claim 7, wherein selecting the two or more machine learning algorithms from the plurality of machine learning algorithms to form the hybrid machine learning model further comprises:.... and The non-transitory computer readable medium of claim 13, wherein the code for selecting the two or more machine learning algorithms from the plurality of machine learning algorithms to form the hybrid machine learning model further comprises code for:... Lobo teaches shrink modelling and the modelling is expounded upon by Austin: modify at least one of the two or more machine learning algorithms that are selected for the hybrid machine learning model. (Austin Par. 49-51-“ Third, the data scientist may use differing techniques for validating a model will perform as expected when predicting events in the future. Examples of validation techniques include cross-validation, leave-one-out validation, and bootstrap validation. In n-fold cross validation, an integer n is chosen (usually 5 or 10), the data scientist splits the training dataset into n equal sized pieces, trains the algorithm on n-1 of those pieces, and then validates the results on the remaining, held-out piece. Leave-one-out validation is cross-validation when n is chosen to be the number of rows in the data set. In bootstrap validation, training data are sampled from the original dataset with replacement, and model validation takes place on data not in the sample. Differing choices of validation may provide better real-world performance approximation, depending on the problem at hand. Data scientists may also try different machine learning algorithms. 
Examples of machine learning algorithms include straight linear prediction algorithms, a random forest algorithm, smart decision tree algorithms, and neural network solutions. Most data scientists are not experts at all of these and tend to have favorites or preferred algorithms, meaning some possible algorithms may not be tried during development of an initial ML solution. Additionally, the platform or the data scientist may submit or evaluate machine learning algorithm solutions from AI robots. Also, optimizing an algorithm requires expertise and use of different parameters.”) Lobo and Austin are directed to retail machine learning modelling. Austin improves upon the machine learning techniques. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the machine modelling analysis of Lobo, as taught by Austin, by utilizing hybrid modelling techniques with a reasonable expectation of success of arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make the modification to the teachings of Lobo with the motivation of improving the prediction of machine learning algorithms (Austin Par. 26). Regarding Claim 7, Lobo teaches A method for performing data analytics using machine learning comprising: extracting a dataset from one or more shrink databases stored in a memory, wherein the one or more shrink databases comprise one or more of inventory information, traffic information, or shrink information associated with a retailer (Lobo Par. 5-6-“ In an embodiment, a retail shrinkage activity prediction and identification system includes: a sensor control system, a first shrinkage database, a second shrinkage database, an analytics engine, and a machine learning engine. The sensor control system is communicatively coupled with a plurality of sensors arranged in a retail environment. 
The sensor control system is configured to control a setting of each of the plurality of sensors. The first shrinkage database includes retail shrinkage data for at least the retail environment. The retail shrinkage data includes at least one item at high risk for shrinkage or at least one time at high risk for shrinkage activity. The second shrinkage database includes external data related to shrinkage in a geographic area of the retail environment. The analytics engine is communicatively coupled with: the first shrinkage database to access the retail shrinkage data, the second shrinkage database to access the external data, and the sensor control system to receive real-time sensor data from the plurality of sensors. The analytics engine is configured to compare the real-time sensor data with the external data to identify a high shrinkage risk situation. If a high shrinkage risk situation is identified, the analytics engine will: issue an alert, cause the sensor control system to alter the setting of at least one of the plurality of sensors, and update at least one of the first shrinkage database or the second shrinkage database. The machine learning engine is communicatively coupled with the first shrinkage database, the second shrinkage database, and the analytics engine to use the retail shrinkage data, the external data, and the issuance of an alert to conduct predictive modeling and cause the analytics engine to issue an alert if the predictive modeling determines that a high shrinkage risk situation is likely to occur.”; Par. 43-44); formatting the dataset that is extracted from the one or more shrink databases, wherein a portion of the formatted dataset is subdivided into a training dataset and testing dataset (Lobo Par. 26-“ External data 132 can additionally or alternatively include data or information shared among retailers or business associations in particular industries and/or geographic areas. 
Any outside public, private, or government database which provides information potentially relevant to shrinkage can be used. In some embodiments, external data 132 is provided, selected, filtered, and/or applied according to a geographic area of relevance to a particular retailer, store, operating area, or other characteristic.”[ filtering equates to dividing the data]; Par. 6; Par. 42); generating one or more shrink features from the training dataset by identifying attributes within the training dataset that are associated with retail theft (Lobo Par. 8-“ In an embodiment, a method of predicting or identifying retail shrinkage activity includes: accessing retail shrinkage data comprising at least one item at high risk for shrinkage or at least one time at high risk for shrinkage activity in a retail environment; accessing external data related to shrinkage in a geographic area of the retail environment; receiving real-time sensor data from a plurality of sensors arranged in the retail environment; comparing the real-time sensor data with the external data to identify a high shrinkage risk situation and if a high shrinkage risk situation is identified, issuing an alert, causing a sensor control system to alter a setting of at least one of the plurality of sensors, and updating at least one of the retail shrinkage data or the external data; conducting predictive modeling using the retail shrinkage data, the external data, and the issuance of an alert; and issuing an alert if the predictive modeling determines that a high shrinkage risk situation is likely to occur. “Par. 24-“ Sensors 104 can include a plurality of sensors. The plurality of sensors 104 can include any of a surveillance camera, an optical sensor, a motion detection sensor, a temperature sensor, an infrared sensor, a microphone, or a pressure sensor, for example. 
Settings 106 of each of the sensors 104 can include an activation, a direction, an angle, a zoom level, a location or a sensing area, for example. Real-time sensor data 108 can include image data, such as an image of clothing or facial features. Real-time sensor data 108 can also include data related to movements of individuals or groups, congregating of individuals, temperature profile data, infrared data, sound recording data, pressure data, time of purchase data, length of trip data, or other potentially relevant tracked information. “Par. 42-“ This information is fed into the machine learning engine 550 for training at 529 and predictions are made by the machine learning engine 550 at 531.”); ... shrink features... associated with retail theft (Lobo Par. 3; Par. 8-“ In an embodiment, a method of predicting or identifying retail shrinkage activity includes: accessing retail shrinkage data comprising at least one item at high risk for shrinkage or at least one time at high risk for shrinkage activity in a retail environment”); ... shrink prediction... (Lobo Par. 5-6-“ In an embodiment, a retail shrinkage activity prediction and identification system includes: a sensor control system, a first shrinkage database, a second shrinkage database, an analytics engine, and a machine learning engine. The sensor control system is communicatively coupled with a plurality of sensors arranged in a retail environment. The sensor control system is configured to control a setting of each of the plurality of sensors. The first shrinkage database includes retail shrinkage data for at least the retail environment. The retail shrinkage data includes at least one item at high risk for shrinkage or at least one time at high risk for shrinkage activity.”) Lobo teaches shrink modelling utilizing machine learning and the machine learning modelling is improved upon by Austin: training a plurality of machine learning algorithms using the training dataset (Austin Par. 
26-31- The system 200 and method implement automatic machine learning processes which selectively utilize various components of different crowd-submitted solutions for optimally solving machine learning problems. The approach exploits the following components (of which 5 components are listed here, but other components could also be possible) for improving the prediction of machine learning algorithms: 1) utilizing additional data to help in the prediction, 2) designing more predictive derived data, also referred to as features or a data pipeline, from existing data, 3) utilizing different or more predictive algorithms, and/or 4) optimizing the parameters in a given algorithm, and/or 5) utilizing different techniques for validating the model performance (e.g. different cross-validation approaches such as form a predictive model using the full training set, or form the model by breaking the training set into sub-training sets, and forming individual models which are then “combined” to an overall model).” testing combinations of plurality of machine learning algorithms based on the one or more ...features such that each combination outputs a predictive result... (Austin Par. 52-“ The hybrid ML system 204 includes a ranking process 208, a select hybrid components process 210, training data 212 and test data 214. In the illustrated embodiment, the hybrid ML system 204 includes a leaderboard 216 and an application programming interface (API) 218. The hybrid ML system 204 operates to develop one or more ML solutions to ML problems such as business problem 220. The hybrid ML system 204 in exemplary embodiments is implemented using a processing system including at least one processor and a memory storing instructions to control operations of the processing system. The hybrid ML system 204 may receive information about the business problem 220 including the dataset and scoring method from the user 202 and develop an initial ML solution from these inputs. 
In alternative embodiments, the hybrid ML system 204 may receive the initial ML solution along with the information about the business problem 220 and the dataset and the scoring method from the user 202.) selecting two or more machine learning algorithms from the plurality of machine learning algorithms to form a hybrid machine learning model, wherein the hybrid machine learning model …and the hybrid machine learning model provides a lower margin of error than the margin of error achieved from any one of the plurality of machine learning algorithms individually( Austin Par. 54-57-“ In some embodiments, the ranking process 208 generates a ranked list of ML models received by or developed by the hybrid ML system 204. The ranked list of ML models may be displayed on leaderboard 216. FIGS. 2B shows examples of leaderboards 232, 234 for a particular machine learning problem. The select hybrid components process 210 receives other ML solutions from the other users 206 and selects components of the other ML solutions to identify a best ML solution. The select hybrid components process 210 provides training data 212 to train each respective ML solution to train the ML solution. The select hybrid components process 210 provides test data 214 to each respective ML solution to test the ML solution. In exemplary embodiments, training data 212 includes all information, including a target value to predict. In some exemplary embodiments, the test data 214 omits the target value. For example, in a sales forecasting problem, the competing models may be given as training data sales data for January and February of a given year but not for March of the year. The test data includes March data and the model is evaluated by the ranking process 208 by its accuracy for the prediction for March sales. The accuracy of predicting the target value using the test data 214 when processing the test data is the basis for scoring the model. 
A more accurate prediction results in a lower log loss score. The hybrid ML system 204 addresses some of the challenges in creating the best machine learning by openly exposing machine learning problems to a cross discipline crowd of experts represented by other users 206. The hybrid ML system 204 enables collecting from the other users 206 individual proposed solutions such as including or consisting of datasets 222, feature sets 224, machine learning algorithms 226, parameter sets 228 and other available information 230. Each proposed solution received from the other users 206 is individually ranked by the ranking process 208 and reported on the leaderboard 216. The hybrid ML”); feeds the output of a first selected machine learning algorithm as inputs to a second selected machine learning algorithm in a particular order and the hybrid machine learning model and storing, in the memory, ... predictions generated from the hybrid machine learning model. (Austin Par.98-99-“ In the competition environment illustrated in FIG. 2D, the judge 272 plays events according to their timestamp information. Events are retrieved from a data store and, according to the time stamp, are presented to the competitor 274. When a score triggering event occurs, such as a retail point-of-sale card swipe in stream 280, the judge 272 pauses all data feeds and waits a specified time, such as 250 ms, to allow for the competitor 274 to provide its score. Once the score of the competitor 274 has been received, the judge 272 continues to replay events until the next triggering event occurs and the process repeats. If the competitor process is unable to provide its score within the time limit, the judge 272 marks the score as timed out and penalizes the competitor 274. 
In this way, it is impossible for the competitor 274 to leak future data into their risk score and cheat by having access to information that would have not been available in a real-world situation.”) Lobo and Austin are directed to retail machine learning modelling. Austin improves upon the machine learning techniques. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the machine modelling analysis of Lobo, as taught by Austin, by utilizing hybrid modelling techniques with a reasonable expectation of success of arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make the modification to the teachings of Lobo with the motivation of improving the prediction of machine learning algorithms (Austin Par. 26). Lobo in view of Austin fail to teach the following feature taught by Lei: wherein the testing of combinations of the plurality of machine learning algorithms comprises feeding outputs of a first machine learning algorithm as inputs to a second machine learning algorithm in a particular order (Lei Par. 78; Par. 62-“ In one embodiment, multiple rounds of the functionality of FIG. 2 are executed in order to produce multiple optimized feature sets. Each feature set can be used as input into a forecasting algorithm to generate forecasting trained models. The multiple trained models can then be aggregated to generate a demand forecast, as disclosed in detail below in conjunction with FIG. 5. The output of the functionality of FIG. 2 is one or more optimized feature sets.”; Par. 65-“ In embodiments disclosed above, where one or more optimized feature sets are generated using the functionality of FIG. 2, embodiments use the optimized feature sets as input to forecasting algorithms to generate forecasting models. FIG. 5 is a flow diagram of the functionality of promotion effects module 16 of FIG. 
1 when determining promotion effects at an aggregate level using multiple trained models in accordance with one embodiment. The multiple models can be generated using the functionality of FIG. 2.; Par. 72-73; Abstract). hybrid machine learning model…feeds the output of a first selected machine learning algorithm as inputs to a second selected machine learning algorithm in a particular order…(Lei Par. 78; Par. 19; Par. 62-“ In one embodiment, multiple rounds of the functionality of FIG. 2 are executed in order to produce multiple optimized feature sets. Each feature set can be used as input into a forecasting algorithm to generate forecasting trained models. The multiple trained models can then be aggregated to generate a demand forecast, as disclosed in detail below in conjunction with FIG. 5. The output of the functionality of FIG. 2 is one or more optimized feature sets.”; Par. 65-“ In embodiments disclosed above, where one or more optimized feature sets are generated using the functionality of FIG. 2, embodiments use the optimized feature sets as input to forecasting algorithms to generate forecasting models. FIG. 5 is a flow diagram of the functionality of promotion effects module 16 of FIG. 1 when determining promotion effects at an aggregate level using multiple trained models in accordance with one embodiment. The multiple models can be generated using the functionality of FIG. 2.; Par. 72-73; Abstract) Lobo, Austin and Lei are directed to machine learning modelling. Austin and Lei improve upon the machine learning techniques. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the modelling analysis of Lobo in view of Austin, as taught by Lei, by utilizing modelling techniques with a reasonable expectation of success of arriving at the claimed invention. 
One of ordinary skill in the art would have been motivated to make the modification to the teachings of Lobo in view of Austin with the motivation of improving forecasting (Lei Par. 30). Regarding Claim 13, Lobo teaches A non-transitory computer readable medium for performing data analytics using machine learning, comprising code for: extracting a dataset from one or more shrink databases stored in a memory, wherein the one or more shrink databases comprise one or more of inventory information, traffic information, or shrink information associated with a retailer; (Lobo Par. 5-6-“ In an embodiment, a retail shrinkage activity prediction and identification system includes: a sensor control system, a first shrinkage database, a second shrinkage database, an analytics engine, and a machine learning engine. The sensor control system is communicatively coupled with a plurality of sensors arranged in a retail environment. The sensor control system is configured to control a setting of each of the plurality of sensors. The first shrinkage database includes retail shrinkage data for at least the retail environment. The retail shrinkage data includes at least one item at high risk for shrinkage or at least one time at high risk for shrinkage activity. The second shrinkage database includes external data related to shrinkage in a geographic area of the retail environment. The analytics engine is communicatively coupled with: the first shrinkage database to access the retail shrinkage data, the second shrinkage database to access the external data, and the sensor control system to receive real-time sensor data from the plurality of sensors. The analytics engine is configured to compare the real-time sensor data with the external data to identify a high shrinkage risk situation. 
If a high shrinkage risk situation is identified, the analytics engine will: issue an alert, cause the sensor control system to alter the setting of at least one of the plurality of sensors, and update at least one of the first shrinkage database or the second shrinkage database. The machine learning engine is communicatively coupled with the first shrinkage database, the second shrinkage database, and the analytics engine to use the retail shrinkage data, the external data, and the issuance of an alert to conduct predictive modeling and cause the analytics engine to issue an alert if the predictive modeling determines that a high shrinkage risk situation is likely to occur.”; Par. 43-44); formatting the dataset that is extracted from the one or more shrink databases, wherein a portion of the formatted dataset is subdivided into a training dataset and testing dataset (Lobo Par. 26-“ External data 132 can additionally or alternatively include data or information shared among retailers or business associations in particular industries and/or geographic areas. Any outside public, private, or government database which provides information potentially relevant to shrinkage can be used. In some embodiments, external data 132 is provided, selected, filtered, and/or applied according to a geographic area of relevance to a particular retailer, store, operating area, or other characteristic.”[ filtering equates to dividing the data]; Par. 6; Par. 42); generating one or more shrink features from the training dataset by identifying attributes within the training dataset that are associated with retail theft (Lobo Par. 
8-“ In an embodiment, a method of predicting or identifying retail shrinkage activity includes: accessing retail shrinkage data comprising at least one item at high risk for shrinkage or at least one time at high risk for shrinkage activity in a retail environment; accessing external data related to shrinkage in a geographic area of the retail environment; receiving real-time sensor data from a plurality of sensors arranged in the retail environment; comparing the real-time sensor data with the external data to identify a high shrinkage risk situation and if a high shrinkage risk situation is identified, issuing an alert, causing a sensor control system to alter a setting of at least one of the plurality of sensors, and updating at least one of the retail shrinkage data or the external data; conducting predictive modeling using the retail shrinkage data, the external data, and the issuance of an alert; and issuing an alert if the predictive modeling determines that a high shrinkage risk situation is likely to occur. “Par. 24-“ Sensors 104 can include a plurality of sensors. The plurality of sensors 104 can include any of a surveillance camera, an optical sensor, a motion detection sensor, a temperature sensor, an infrared sensor, a microphone, or a pressure sensor, for example. Settings 106 of each of the sensors 104 can include an activation, a direction, an angle, a zoom level, a location or a sensing area, for example. Real-time sensor data 108 can include image data, such as an image of clothing or facial features. Real-time sensor data 108 can also include data related to movements of individuals or groups, congregating of individuals, temperature profile data, infrared data, sound recording data, pressure data, time of purchase data, length of trip data, or other potentially relevant tracked information. “Par. 
42-“ This information is fed into the machine learning engine 550 for training at 529 and predictions are made by the machine learning engine 550 at 531.”); ... shrink features... associated with retail theft (Lobo Par. 3; Par. 8-“ In an embodiment, a method of predicting or identifying retail shrinkage activity includes: accessing retail shrinkage data comprising at least one item at high risk for shrinkage or at least one time at high risk for shrinkage activity in a retail environment”); ... shrink prediction... (Lobo Par. 5-6-“ In an embodiment, a retail shrinkage activity prediction and identification system includes: a sensor control system, a first shrinkage database, a second shrinkage database, an analytics engine, and a machine learning engine. The sensor control system is communicatively coupled with a plurality of sensors arranged in a retail environment. The sensor control system is configured to control a setting of each of the plurality of sensors. The first shrinkage database includes retail shrinkage data for at least the retail environment. The retail shrinkage data includes at least one item at high risk for shrinkage or at least one time at high risk for shrinkage activity.”) Lobo teaches shrink modelling utilizing machine learning and the machine learning modelling is improved upon by Austin: training a plurality of machine learning algorithms using the training dataset (Austin Par. 26-31- The system 200 and method implement automatic machine learning processes which selectively utilize various components of different crowd-submitted solutions for optimally solving machine learning problems. 
The approach exploits the following components (of which 5 components are listed here, but other components could also be possible) for improving the prediction of machine learning algorithms: 1) utilizing additional data to help in the prediction, 2) designing more predictive derived data, also referred to as features or a data pipeline, from existing data, 3) utilizing different or more predictive algorithms, and/or 4) optimizing the parameters in a given algorithm, and/or 5) utilizing different techniques for validating the model performance (e.g. different cross-validation approaches such as form a predictive model using the full training set, or form the model by breaking the training set into sub-training sets, and forming individual models which are then “combined” to an overall model).” testing combinations of plurality of machine learning algorithms based on the one or more ...features such that each combination outputs a predictive result... (Austin Par. 52-“ The hybrid ML system 204 includes a ranking process 208, a select hybrid components process 210, training data 212 and test data 214. In the illustrated embodiment, the hybrid ML system 204 includes a leaderboard 216 and an application programming interface (API) 218. The hybrid ML system 204 operates to develop one or more ML solutions to ML problems such as business problem 220. The hybrid ML system 204 in exemplary embodiments is implemented using a processing system including at least one processor and a memory storing instructions to control operations of the processing system. The hybrid ML system 204 may receive information about the business problem 220 including the dataset and scoring method from the user 202 and develop an initial ML solution from these inputs. In alternative embodiments, the hybrid ML system 204 may receive the initial ML solution along with the information about the business problem 220 and the dataset and the scoring method from the user 202.) 
selecting two or more machine learning algorithms from the plurality of machine learning algorithms to form a hybrid machine learning model, wherein the hybrid machine learning model provides a lower margin of error than the margin of error achieved from any one of the plurality of machine learning algorithms individually( Austin Par. 54-57-“ In some embodiments, the ranking process 208 generates a ranked list of ML models received by or developed by the hybrid ML system 204. The ranked list of ML models may be displayed on leaderboard 216. FIGS. 2B shows examples of leaderboards 232, 234 for a particular machine learning problem. The select hybrid components process 210 receives other ML solutions from the other users 206 and selects components of the other ML solutions to identify a best ML solution. The select hybrid components process 210 provides training data 212 to train each respective ML solution to train the ML solution. The select hybrid components process 210 provides test data 214 to each respective ML solution to test the ML solution. In exemplary embodiments, training data 212 includes all information, including a target value to predict. In some exemplary embodiments, the test data 214 omits the target value. For example, in a sales forecasting problem, the competing models may be given as training data sales data for January and February of a given year but not for March of the year. The test data includes March data and the model is evaluated by the ranking process 208 by its accuracy for the prediction for March sales. The accuracy of predicting the target value using the test data 214 when processing the test data is the basis for scoring the model. A more accurate prediction results in a lower log loss score. The hybrid ML system 204 addresses some of the challenges in creating the best machine learning by openly exposing machine learning problems to a cross discipline crowd of experts represented by other users 206. 
The hybrid ML system 204 enables collecting from the other users 206 individual proposed solutions such as including or consisting of datasets 222, feature sets 224, machine learning algorithms 226, parameter sets 228 and other available information 230. Each proposed solution received from the other users 206 is individually ranked by the ranking process 208 and reported on the leaderboard 216. The hybrid ML”); and storing, in the memory, ... predictions generated from the hybrid machine learning model. (Austin Par. 98-99-“In the competition environment illustrated in FIG. 2D, the judge 272 plays events according to their timestamp information. Events are retrieved from a data store and, according to the time stamp, are presented to the competitor 274. When a score triggering event occurs, such as a retail point-of-sale card swipe in stream 280, the judge 272 pauses all data feeds and waits a specified time, such as 250 ms, to allow for the competitor 274 to provide its score. Once the score of the competitor 274 has been received, the judge 272 continues to replay events until the next triggering event occurs and the process repeats. If the competitor process is unable to provide its score within the time limit, the judge 272 marks the score as timed out and penalizes the competitor 274. In this way, it is impossible for the competitor 274 to leak future data into their risk score and cheat by having access to information that would have not been available in a real-world situation.”) Lobo and Austin are directed to retail machine learning modelling. Austin improves upon the machine learning techniques. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the machine learning modelling analysis of Lobo, as taught by Austin, by utilizing hybrid modelling techniques with a reasonable expectation of success of arriving at the claimed invention. 
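For illustration only, the "selecting two or more machine learning algorithms ... wherein the hybrid machine learning model provides a lower margin of error than ... any one of the plurality of machine learning algorithms individually" limitation can be sketched in a few lines. The dataset, the two toy "models," and the averaging rule below are hypothetical stand-ins for the claimed algorithms, not the applicant's or the cited references' actual implementations.

```python
# Minimal sketch: select a two-model "hybrid" whose test error is lower
# than either constituent model's error. Both models and the data are
# hypothetical stand-ins for the claimed machine learning algorithms.

def margin_of_error(predictions, actuals):
    """Mean absolute error over the testing dataset."""
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(actuals)

# Hypothetical testing dataset: true daily shrink-event counts.
actuals = [10, 12, 8, 15, 11]

# Model A systematically over-predicts; Model B under-predicts.
model_a = [a + 2 for a in actuals]   # individual margin of error = 2.0
model_b = [a - 2 for a in actuals]   # individual margin of error = 2.0

# Hybrid: simple average of the two models' outputs.
hybrid = [(pa + pb) / 2 for pa, pb in zip(model_a, model_b)]

err_a = margin_of_error(model_a, actuals)
err_b = margin_of_error(model_b, actuals)
err_h = margin_of_error(hybrid, actuals)

# The combination is retained only if it beats every individual model,
# mirroring the "lower margin of error" limitation.
assert err_h < err_a and err_h < err_b
print(err_a, err_b, err_h)  # 2.0 2.0 0.0
```

The offsetting errors are contrived so that averaging cancels them exactly; real ensembles reduce, rather than eliminate, error.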
One of ordinary skill in the art would have been motivated to make the modification to the teachings of Lobo with the motivation of improving the prediction of machine learning algorithms: (Austin Par. 26). Lobo in view of Austin fail to teach the following feature taught by Lei: wherein the testing of combinations of the plurality of machine learning algorithms comprises feeding outputs of a first machine learning algorithm as inputs to a second machine learning algorithm in a particular order (Lei Par. 78;Par. 62-“ In one embodiment, multiple rounds of the functionality of FIG. 2 are executed in order to produce multiple optimized feature sets. Each feature set can be used as input into a forecasting algorithm to generate forecasting trained models. The multiple trained models can then be aggregated to generate a demand forecast, as disclosed in detail below in conjunction with FIG. 5. The output of the functionality of FIG. 2 is one or more optimized feature sets.”; Par. 65-“ In embodiments disclosed above, where one or more optimized feature sets are generated using the functionality of FIG. 2, embodiments use the optimized feature sets as input to forecasting algorithms to generate forecasting models. FIG. 5 is a flow diagram of the functionality of promotion effects module 16 of FIG. 1 when determining promotion effects at an aggregate level using multiple trained models in accordance with one embodiment. The multiple models can be generated using the functionality of FIG. 2.; Par. 72-73; Abstract). hybrid machine learning model…feeds the output of a first selected machine learning algorithm as inputs to a second selected machine learning algorithm in a particular order…(Lei Par. 78; Par. 19; Par. 62-“ In one embodiment, multiple rounds of the functionality of FIG. 2 are executed in order to produce multiple optimized feature sets. Each feature set can be used as input into a forecasting algorithm to generate forecasting trained models. 
The multiple trained models can then be aggregated to generate a demand forecast, as disclosed in detail below in conjunction with FIG. 5. The output of the functionality of FIG. 2 is one or more optimized feature sets.”; Par. 65-“ In embodiments disclosed above, where one or more optimized feature sets are generated using the functionality of FIG. 2, embodiments use the optimized feature sets as input to forecasting algorithms to generate forecasting models. FIG. 5 is a flow diagram of the functionality of promotion effects module 16 of FIG. 1 when determining promotion effects at an aggregate level using multiple trained models in accordance with one embodiment. The multiple models can be generated using the functionality of FIG. 2.; Par. 72-73; Abstract) Lobo, Austin and Lei are directed to machine learning modelling. Austin and Lei improve upon the machine learning techniques. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the modelling analysis of Lobo in view of Austin, as taught by Lei, by utilizing modelling techniques with a reasonable expectation of success of arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make the modification to the teachings of Lobo in view of Austin with the motivation of improving forecasting (Lei Par. 30). Regarding Claim 19, Lobo in view of Austin in further view of Lei teach The apparatus of claim 1,… Lobo teaches shrink modelling utilizing machine learning and the machine learning modelling is improved upon by Austin: wherein the instructions to select two or more machine learning algorithms from the plurality of machine learning algorithms to form a hybrid machine learning model comprises an order to apply the two or more selected machine learning algorithms. (Austin Par. 16; Par. 54-57-“The ranked list of ML models may be displayed on leaderboard 216. FIGS. 
2B shows examples of leaderboards 232, 234 for a particular machine learning problem. The select hybrid components process 210 receives other ML solutions from the other users 206 and selects components of the other ML solutions to identify a best ML solution.”); Lobo and Austin are directed to retail machine learning modelling. Austin improves upon the machine learning techniques. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the machine learning modelling analysis of Lobo, as taught by Austin, by utilizing hybrid modelling techniques with a reasonable expectation of success of arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make the modification to the teachings of Lobo with the motivation of improving the prediction of machine learning algorithms (Austin Par. 26). Regarding Claim 20, Lobo in view of Austin in further view of Lei teach The apparatus of claim 1,… Lobo teaches shrink modelling utilizing machine learning and the machine learning modelling is improved upon by Austin: wherein the one or more shrink databases include weather information. (Austin Par. 48-“Addressing a predictive problem starts with a base dataset, such as the existing data associated with the problem. Data scientists consider if there is other supplemental data to add to the dataset, comparable to adding other columns to the dataset. In an example, one problem seeks to predict sales on a given day. However, daily sales might depend on the weather on that day. If weather is not currently in the dataset, the data scientist may add weather data to the dataset. Similarly, other incremental data may be added if the other incremental data is predictive.”; Par. 142; Par. 154); Lobo and Austin are directed to retail machine learning modelling. Austin improves upon the machine learning techniques. 
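For illustration only, the chained arrangement attributed to Lei above, feeding the output of a first machine learning algorithm as an input to a second in a particular order, is commonly called stacking. The two stages below are hypothetical toy functions, not Lei's actual embodiment; the "bias" parameter stands in for a trained coefficient.

```python
# Hypothetical two-stage ("stacked") pipeline: stage one produces a rough
# forecast; stage two receives that forecast as an additional input and
# corrects it. The order of application is fixed, as in the claimed
# "particular order" limitation.

def stage_one(features):
    """First algorithm: a crude baseline forecast (here, the feature mean)."""
    return sum(features) / len(features)

def stage_two(features, first_output, bias=1.5):
    """Second algorithm: consumes stage one's output as an input feature
    and applies a correction (bias is a stand-in for a trained parameter)."""
    return first_output + bias

def chained_predict(features):
    # The ordering constraint: stage_two cannot run until stage_one has,
    # because stage_one's output is one of stage_two's inputs.
    rough = stage_one(features)
    return stage_two(features, rough)

print(chained_predict([4, 6, 8]))  # mean 6.0 + bias 1.5 = 7.5
```

Reversing the order is not merely a relabeling: stage two's input would not exist, which is why the ordering is a substantive limitation rather than an arbitrary one.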
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the machine learning modelling analysis of Lobo, as taught by Austin, by utilizing hybrid modelling techniques with a reasonable expectation of success of arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make the modification to the teachings of Lobo with the motivation of improving the prediction of machine learning algorithms (Austin Par. 26). Regarding Claim 21, Lobo in view of Austin in further view of Lei teach The apparatus of claim 1,… wherein the shrink predictions generated from the hybrid machine learning model include predictions of risk factors and likelihood of an item being subject to retail theft for any particular day or time (Lobo Par. 5-6-“The analytics engine is communicatively coupled with: the first shrinkage database to access the retail shrinkage data, the second shrinkage database to access the external data, and the sensor control system to receive real-time sensor data from the plurality of sensors. The analytics engine is configured to compare the real-time sensor data with the external data to identify a high shrinkage risk situation. If a high shrinkage risk situation is identified, the analytics engine will: issue an alert, cause the sensor control system to alter the setting of at least one of the plurality of sensors, and update at least one of the first shrinkage database or the second shrinkage database. The machine learning engine is communicatively coupled with the first shrinkage database, the second shrinkage database, and the analytics engine to use the retail shrinkage data, the external data, and the issuance of an alert to conduct predictive modeling and cause the analytics engine to issue an alert if the predictive modeling determines that a high shrinkage risk situation is likely to occur.; Par. 
8-“ In an embodiment, a method of predicting or identifying retail shrinkage activity includes: accessing retail shrinkage data comprising at least one item at high risk for shrinkage or at least one time at high risk for shrinkage activity in a retail environment; accessing external data related to shrinkage in a geographic area of the retail environment; receiving real-time sensor data from a plurality of sensors arranged in the retail environment; comparing the real-time sensor data with the external data to identify a high shrinkage risk situation and if a high shrinkage risk situation is identified, issuing an alert, causing a sensor control system to alter a setting of at least one of the plurality of sensors, and updating at least one of the retail shrinkage data or the external data; conducting predictive modeling using the retail shrinkage data, the external data, and the issuance of an alert; and issuing an alert if the predictive modeling determines that a high shrinkage risk situation is likely to occur.”) Claims 3, 9 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Lobo, US Publication No. 20190027003A1, [hereinafter Lobo], Austin et al., US Publication No. 20210174130A1, [hereinafter Austin], in further view of Lei et al., US Publication No. 20190188536A1, [hereinafter Lei], and in further view of Cash et al., US Publication No. 20200265437A1, [hereinafter Cash]. Regarding Claim 3, Claim 9 and Claim 15, Lobo in view of Austin in further view of Lei teach The apparatus of claim 1, wherein the instructions to generate the one or more shrink features from the training dataset by identifying the attributes within the training dataset that are associated with the retail theft comprises instructions to:..., The method of claim 7, wherein generating the one or more shrink features from the training dataset by identifying the attributes within the training dataset that are associated with the retail theft further comprises:.... 
and The non-transitory computer readable medium of claim 13, wherein the code for generating the one or more shrink features from the training dataset by identifying the attributes within the training dataset that are associated with the retail theft further comprises code for:... Lobo in view of Austin in further view of Lei fail to teach the following feature taught by Cash: determine a pattern during a time period that directly correlates against increase in the retail theft for the time period. (Cash Par. 39-“ Once transaction patterns are generated 104 into a model which is then tested and validated, the model is provided 106 to a processing engine that evaluates transaction data in near or actual real-time to identify when a shrink event is occurring. The near or actual real-time monitoring of transaction may be implemented as an add-on application to a retailers POS software systems, through a network accessible cloud application or mobile application to alert a self-checkout attendant or a front-end supervisor to potential transactions where shrink may be occurring using the current item and transaction characteristics. Detection of a possible shrinkage event may also or alternatively be transmitted to another shrink-prevention solution, such as an image or video processing system that processes images or video to identify or confirm attempted theft or other system involved in shrinkage prevention. The cloud based solution may be implemented on a network accessible server that is located in a store, in a backend system of a store or a chain of stores, be hosted by a shirk detection service provider, and the like.); Lobo, Austin and Cash are directed to machine learning modelling. 
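For illustration only, the pattern-detection step attributed to Cash above, determining a pattern during a time period that directly correlates with an increase in retail theft, can be sketched as a plain Pearson correlation between an hourly anomaly count and recorded shrink events. Both series below are invented for illustration and are not data from any cited reference.

```python
import math

# Hypothetical hourly series for one time period: count of anomalous
# transactions, and count of recorded shrink events in the same hour.
anomalies = [1, 3, 2, 5, 4, 6]
shrink_events = [0, 2, 1, 4, 3, 5]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A strong positive coefficient flags this time period as one where the
# transaction pattern "directly correlates" with increased theft.
r = pearson(anomalies, shrink_events)
assert r > 0.9
print(round(r, 3))  # 1.0 (the toy series are perfectly linearly related)
```

A production system would of course validate such a pattern on held-out data before alerting, as the quoted Cash passage describes (the model is "tested and validated" before deployment).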
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the modelling analysis of Lobo in view of Austin, as taught by Cash, by utilizing modelling techniques with a reasonable expectation of success of arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make the modification to the teachings of Lobo in view of Austin with the motivation of identifying variation patterns between normal transaction patterns and fraudulent transactions (Cash Par. 5). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US Patent No. 11461690 B2 to Szeto et al. Abstract-“ A distributed, online machine learning system is presented. Contemplated systems include many private data servers, each having local private data. Researchers can request that relevant private data servers train implementations of machine learning algorithms on their local private data without requiring de-identification of the private data or without exposing the private data to unauthorized computing systems. The private data servers also generate synthetic or proxy data according to the data distributions of the actual data. The servers then use the proxy data to train proxy models. When the proxy models are sufficiently similar to the trained actual models, the proxy data, proxy model parameters, or other learned knowledge can be transmitted to one or more non-private computing devices. The learned knowledge from many private data servers can then be aggregated into one or more trained global models without exposing private data.” THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Chesiree Walton, whose telephone number is (571) 272-5219. The examiner can normally be reached from Monday to Friday between 8 AM and 5 PM. If any attempt to reach the examiner by telephone is unsuccessful, the examiner’s supervisor, Patricia Munson, can be reached at (571) 270-5396. The fax telephone numbers for this group are either (571) 273-8300 or (703) 872-9326 (for official communications including After Final communications labeled “Box AF”). Another resource that is available to applicants is the Patent Application Information Retrieval (PAIR) system. Information regarding the status of an application can be obtained from the PAIR system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, please feel free to contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Applicants are invited to contact the Office to schedule an in-person interview to discuss and resolve the issues set forth in this Office Action. 
Although an interview is not required, the Office believes that an interview can be of use to resolve any issues related to a patent application in an efficient and prompt manner. Sincerely, /CHESIREE A WALTON/Examiner, Art Unit 3624

Prosecution Timeline

Apr 22, 2021
Application Filed
Nov 30, 2022
Non-Final Rejection — §101, §103
Mar 03, 2023
Response Filed
May 10, 2023
Final Rejection — §101, §103
Jul 10, 2023
Response after Non-Final Action
Jul 25, 2023
Response after Non-Final Action
Aug 15, 2023
Request for Continued Examination
Aug 16, 2023
Response after Non-Final Action
Nov 14, 2023
Non-Final Rejection — §101, §103
Feb 20, 2024
Response Filed
Apr 26, 2024
Final Rejection — §101, §103
Jul 01, 2024
Response after Non-Final Action
Jul 10, 2024
Examiner Interview (Telephonic)
Jul 10, 2024
Response after Non-Final Action
Sep 03, 2024
Request for Continued Examination
Sep 03, 2024
Response after Non-Final Action
Sep 04, 2024
Response after Non-Final Action
Dec 11, 2024
Non-Final Rejection — §101, §103
Mar 17, 2025
Response Filed
May 20, 2025
Final Rejection — §101, §103
Jul 21, 2025
Examiner Interview Summary
Jul 21, 2025
Applicant Interview (Telephonic)
Aug 20, 2025
Request for Continued Examination
Aug 25, 2025
Response after Non-Final Action
Sep 23, 2025
Non-Final Rejection — §101, §103
Dec 24, 2025
Response Filed
Feb 24, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591903
SELF-SUPERVISED SYSTEM GENERATING EMBEDDINGS REPRESENTING SEQUENCED ACTIVITY
2y 5m to grant Granted Mar 31, 2026
Patent 12561640
METHOD AND SYSTEM TO STREAMLINE RETURN DECISION AND OPTIMIZE COSTS
2y 5m to grant Granted Feb 24, 2026
Patent 12555047
SYSTEMS AND METHODS FOR FORMULATING OR EVALUATING A CONSTRUCTION COMPOSITION
2y 5m to grant Granted Feb 17, 2026
Patent 12518292
HIERARCHY AWARE GRAPH REPRESENTATION LEARNING
2y 5m to grant Granted Jan 06, 2026
Patent 12333460
DISPLAY OF MULTI-MODAL VEHICLE INDICATORS ON A MAP
2y 5m to grant Granted Jun 17, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

9-10
Expected OA Rounds
30%
Grant Probability
58%
With Interview (+28.6%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 211 resolved cases by this examiner. Grant probability derived from career allow rate.
