Prosecution Insights
Last updated: April 19, 2026
Application No. 18/684,618

DEEP LEARNING-BASED ABNORMAL BEHAVIOR DETECTION SYSTEM THROUGH DEIDENTIFICATION DATA ANALYSIS

Non-Final OA · §103 · §DP
Filed: Feb 16, 2024
Examiner: GEBRESLASSIE, WINTA
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: UNIUNI CORPORATION
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career allow rate: 76% (101 granted / 133 resolved), +13.9% vs TC avg (above average)
Interview lift: +24.7% among resolved cases with an interview
Typical timeline: 2y 5m avg prosecution; 53 applications currently pending
Career history: 186 total applications across all art units

Statute-Specific Performance

§101: 3.3% (-36.7% vs TC avg)
§103: 66.4% (+26.4% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§112: 5.0% (-35.0% vs TC avg)
Deltas are relative to the Tech Center average estimate; based on career data from 133 resolved cases.
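The headline figures above reduce to simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, where only the granted/resolved counts and the +13.9% delta come from the data above and the rounding conventions are assumed:

```python
granted, resolved = 101, 133          # examiner's career record

allow_rate = granted / resolved       # career allow rate
tc_delta_pts = 13.9                   # reported lead over the TC average, in points
implied_tc_avg = allow_rate * 100 - tc_delta_pts

print(f"career allow rate: {allow_rate:.0%}")        # 76%
print(f"implied TC average: {implied_tc_avg:.1f}%")  # 62.0%
```

The 76% tile is simply 101/133 rounded to a whole percent; the implied Tech Center average follows by subtracting the reported delta.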

Office Action

§103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-8 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-8 of US Patent No. 12444236 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the invention variously defined by the claims of the instant application is anticipated by and/or is an obvious variant of the invention as stipulated by the claims of the 18/265,364 application. The copending claims recite “anonymized image data”, whereas the instant claims recite “de-identified image information data”.
These terms are functionally equivalent in the context of the claimed system, as both describe image data from which personally identifiable information of the subject has been removed prior to processing by the deep learning server. The claims do not recite any structural, algorithmic, or procedural differences in how anonymization or de-identification is performed, nor do they impose any technical constraints that would distinguish one from the other. Substituting “anonymized” for “de-identified” represents nothing more than a semantic variation that does not give rise to a patentable distinction. Accordingly, the claimed subject matter of the instant claims would have been an obvious variant of the copending claims to one of ordinary skill in the art at the time of invention. Below is a table showing the conflicting claims, reproduced here in paired form:

18/684,618 (instant), claim 1:
A system for detecting an abnormal behavior based on deep learning, the system comprising: a detection device which detects a behavior of a subject within a predetermined area to generate de-identified image information data; a deep learning server which receives the de-identified image information data from the detection device, extracts feature information from the de-identified image information data, outputs behavior prediction information reflecting a temporal change of the feature information, compares the behavior prediction information with pre-learned behavior patterns to calculate similarity, and determines whether there is an abnormal behavior by determining whether the behavior prediction information belongs to a normal behavior type or an abnormal behavior type based on the similarity; and a web server which receives an abnormal behavior determination result from the deep learning server to generate a warning signal notifying that the behavior of the subject is an abnormal behavior when it is determined that the de-identified image information data belongs to the abnormal behavior type, and transmits the warning signal to a management server or a terminal.

18/265,364 (copending), claim 1:
A deep learning-based abnormal behavior detection system comprising: a detection device configured to detect a behavior of a subject within a predetermined area and generate anonymized image data; a deep learning server configured to: receive the anonymized image data from the detection device; extract feature information from the anonymized image data; output behavior prediction information reflecting temporal changes of the feature information; compare the behavior prediction information with pre-learned behavior patterns to calculate similarity; and determine whether the behavior prediction information belongs to a normal behavior type or an abnormal behavior type based on the similarity to determine abnormal behavior; and a web server configured to receive a result of determining the abnormal behavior from the deep learning server and to generate and transmit to a management server or a terminal a warning signal indicating that the behavior of the subject is an abnormal behavior when the anonymized type.

18/684,618, claim 2:
The system of claim 1, wherein the detection device includes a time of flight (ToF) sensor.

18/684,618, claim 3:
The system of claim 1, wherein the deep learning server includes: a CNN (Convolutional Neural Network) which extracts feature information from the received de-identified image information data; an LSTM (Long Short Term Memory network) which receives the feature information from the CNN in time series to output behavior prediction information reflecting a temporal change; and a classification layer which compares the behavior prediction information received from the LSTM with learned behavior patterns, wherein the classification layer determines similarity of the behavior prediction information with respect to the learned behavior patterns based on the similarity, and determines a behavior type to which the behavior pattern determined to be most similar belongs as a behavior type of the behavior prediction information.

18/684,618, claim 7:
The system of claim 4, further comprising a warning notification unit which generates a warning signal such that the subject recognizes the abnormal behavior when the abnormal behavior determination unit classifies the de-identified image information data as abnormal behavior data.

18/684,618, claim 8:
The system of claim 4, further comprising a risk notification unit which recognizes an external risk when the abnormal behavior determination unit classifies the de-identified image information data as abnormal behavior data and transmits a risk signal to the management server.

18/265,364, claim 2:
The deep learning-based abnormal behavior detection system of claim 1, wherein the detection device comprises a time of flight (ToF) sensor.

18/265,364, claim 3:
The deep learning-based abnormal behavior detection system of claim 1, wherein the deep learning server comprises: a convolutional neural network (CNN) configured to extract feature information from the received anonymized image data; a long short term memory network (LSTM) configured to receive the feature information from the CNN in time series and output the behavior prediction information reflecting the temporal changes; and a classification layer configured to compare the behavior prediction information received from the LSTM with the learned behavior patterns, wherein the classification layer determines a learned behavioral pattern most similar to the behavior prediction information among the learned behavioral patterns based on the similarity, and determines a behavior type to which the most similar behavior pattern belongs as a behavior type of the behavior prediction information.

18/265,364, claim 7:
The deep learning-based abnormal behavior detection system of claim 4, further comprising: a warning notifier configured to generate a warning signal to be recognized by the subject when the anonymized image data is identified as abnormal behavior data by the abnormal behavior determiner.

18/265,364, claim 8:
The deep learning-based abnormal behavior detection system of claim 4, further comprising: an emergency notifier configured to transmit, when the anonymized image data is identified as abnormal behavior data by the abnormal behavior determiner, an emergency signal to the management server in recognition of an external emergency.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: reception unit, determination unit, transmission unit, and notification unit, in claims 4, 6 and 7.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1, 4, and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Ni et al. (US 20120134532 A1) in view of Roland et al. (US 20210295581 A1).

Regarding claim 1, Ni et al.
teach a system for detecting an abnormal behavior based on deep learning (see para [0008]; “The machine learning mechanism based on multiple features is adopted in our invention for detecting abnormal behaviors automatically”, see also para [0032]; “the behavior model establishing unit 13 uses unsupervised learning method”, and [0061]; “The system can automatically recognize the normal and abnormal behavior through an unsupervised learning process even under complex traffic regulation system or being applied in various environments”, Note: Deep learning is a subset of machine learning, and operating in "various environments" and "complex traffic regulation systems," which aligns with the high-performance capabilities of deep learning in, e.g., video surveillance or traffic), the system comprising: a detection device which detects a behavior of a subject within a predetermined area (see para [0024]; “Firstly, the system receives the monitoring data from one or more sensors installed in the environment under surveillance (step S201)”, Note: In the environment under surveillance implies predetermined area); a deep learning server which receives the de-identified image information data from the detection device (see Abstract; “a system and a method for abnormal behavior detection using automatic classification of multiple features. Features from various sources, including those extracted from camera input through digital image analysis, are used as input to machine learning algorithms. These algorithms group the features and produce models of normal and abnormal behaviors.
Outlying behaviors, such as those identified by their lower frequency, are deemed abnormal”, para [0024]; “the system receives the monitoring data from one or more sensors installed in the environment under surveillance (step S201)” Note: processing unit/machine learning that receives image data, extracts features, and classifies normal vs abnormal behavior corresponds functionally to the claimed deep learning server), extracts feature information from the de-identified image information data, outputs behavior prediction information reflecting a temporal change of the feature information (see para [0009]; “pattern recognition techniques to extract various features of each object from captured images. Rely on the techniques of machine learning”, see also para [0021]; “In the case of surveillance video, the feature extraction unit 11 extracts all distinctive features and continuously-changed behaviors of an extracted object from the image sequence by various image analysis techniques”, and para [0023]; “the output unit 17 outputs the behavior model constructed by the behavior model establishing unit 13, and behavior type classification results are outputted by the behavior determining unit 15” Note: image sequence by various image analysis implies a temporal change of the feature information), compares the behavior prediction information with pre-learned behavior patterns to calculate similarity, and determines whether there is an abnormal behavior by determining whether the behavior prediction information belongs to a normal behavior type or an abnormal behavior type based on the similarity (see para [0041]; “Besides the introduction of supervised classification parameters (303),..the behavior model establishing unit 33 may employ an unsupervised learning method to perform the cluster analysis on each object's behavior based on the similarity among feature set. After that, the behaviors of the plurality of objects are divided into a plurality of groups.
According to the probability distribution of groups, the groups with the probability higher than a threshold are regarded as the normal behavior. Therefore, a normal behavior model is established based on the associated feature sets. On the contrary, the groups with the probability lower than the threshold are regarded as the abnormal behavior, and an abnormal behavior model is then established. The mentioned threshold can be set manually in advance, or be automatically acquired through a learning process”, see also para [0042]; “The behavior determining unit 35 performs a comparison between the observed object's feature set and the behavior models. If the feature set is in accordance with any normal behavior model, the behavior of the object is regard as normal. On the contrary, if the object's feature set does not fit with any normal behavior model, or fit with any abnormal behavior model, the object's behavior is regarded as abnormal”); and a web server which receives an abnormal behavior determination result from the deep learning server to generate a warning signal notifying that the behavior of the subject is an abnormal behavior when it is determined that the de-identified image information data belongs to the abnormal behavior type, and transmits the warning signal to a management server or a terminal (see para [0032]; “a behavior is associated with a normal behavior model due to its high frequency; in the contrary, a rare behavior is associated with an abnormal behavior model. In another embodiment, the mentioned behavior model establishing unit 13 can be supervised, the model is established according to some supervised classification parameter. Therefore, the abnormal behaviors can be clearly defined….Next, the output unit 17 can be a storage device, a display, a transmitter, or any combination of any number of these devices. 
The objectives of the system are for real time alarm, or store the abnormal events for further citation”, see also para [0041]; “These behavior models established by the behavior model establishing unit are transmitted to the behavior determining unit, and be used for abnormal behavior determination” Note: a unit that receives abnormal determination result and generates/transmits a warning to external systems corresponds to the “web server”).

However, Ni et al. does not teach generating de-identified image information data. In the same field of endeavor, Roland et al. teaches generating de-identified image information data (see Abstract; “An anonymization apparatus 6 is proposed for the generation of anonymized images 9, wherein surveillance images 5 are provided through video surveillance of a surveillance region 3 by means of at least one camera 2, with a recognition module 11” Note: de-identified image implies anonymized images).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify a method for abnormal behavior detection using automatic classification of multiple features of Ni et al. in view of an anonymization apparatus for the generation of anonymized images of Roland et al. in order to test algorithms without providing personally specific data (see Abstract).

Regarding claim 4, Ni teaches a system for analyzing a behavior pattern by detecting a behavior of a subject within a predetermined area (see para [0009]; “image analysis and pattern recognition techniques to extract various features of each object from captured images.
Rely on the techniques of machine learning, the behaviors of the monitored objects can be classified into several groups and the associated behavior models are established”), an abnormal behavior determination unit which extracts feature information from the received de-identified image information data, outputs behavior prediction information reflecting a temporal change of the feature information (see para [0030]; “the image sequence 301 generated from the monitoring environment is inputted to the feature extraction unit 31 (step S401) and then the spatial feature extraction module 310 and the temporal feature extraction module 312 extract the spatial and the temporal features from the image sequence 301 by some image analysis techniques (step S403), respectively”, see also para [0028]; “pattern recognition techniques to extract various features of each object from captured images. Rely on the techniques of machine learning, the behaviors of the monitored objects can be classified into several groups and the associated behavior models are established”), compares the behavior prediction information with pre-learned behavior patterns to calculate similarity and determines whether there is an abnormal behavior by determining whether the behavior prediction information belongs to a normal behavior type or an abnormal behavior type based on the similarity, and a transmission unit which transmits, to a management server or terminal, (see para [0042]; “The behavior determining unit 35 performs a comparison between the observed object's feature set and the behavior models. If the feature set is in accordance with any normal behavior model, the behavior of the object is regard as normal. On the contrary, if the object's feature set does not fit with any normal behavior model, or fit with any abnormal behavior model, the object's behavior is regarded as abnormal. 
The result of the determination is then transferred to the output unit 37, and the result is outputted or stored in a memory by the output unit 37”); a signal notifying that the behavior of the subject is an abnormal behavior when the abnormal behavior determination unit determines that the de-identified image information data belongs to the abnormal behavior type (see para [0032]; “a behavior is associated with a normal behavior model due to its high frequency; in the contrary, a rare behavior is associated with an abnormal behavior model. In another embodiment, the mentioned behavior model establishing unit 13 can be supervised, the model is established according to some supervised classification parameter. Therefore, the abnormal behaviors can be clearly defined. [0033] Next, the output unit 17 can be a storage device, a display, a transmitter, or any combination of any number of these devices. The objectives of the system are for real time alarm, or store the abnormal events for further citation”).

In the same field of endeavor, Roland et al. teaches the system comprising: a reception unit which receives de-identified image information data in which the behavior of the subject is detected within the predetermined area in time series from a sensor for detecting the predetermined area (see para [0006]; “A surveillance region can be monitored by means of a camera. The surveillance region is, for example, an interior region or an exterior region. The surveillance region is preferably a public region, for example a public authority, an airport or a railway station. The surveillance region can, in particular, be a region of a vehicle, monitored, for example, for autonomous driving. The camera is preferably arranged and/or can be arranged in the surveillance region.
The camera is in particular configured to generate a data stream of images, in particular a video sequence, as surveillance images”, see also para [0009]; “in particular, that an anonymization apparatus is proposed in the processing module that permits an anonymization of persons in a complete video sequence, wherein the behavior of individual persons and/or interactions of groups of persons are retained. The anonymized images can thus, for example, be used for training or testing algorithms without providing personally specific data”).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify a method for abnormal behavior detection using automatic classification of multiple features of Ni et al. in view of an anonymization apparatus for the generation of anonymized images of Roland et al. in order to test algorithms without providing personally specific data (see para [0006]).

Regarding claim 7, the rejection of claim 4 is incorporated herein. Ni et al. in the combination further teach a system further comprising a warning notification unit which generates a warning signal such that the subject recognizes the abnormal behavior when the abnormal behavior determination unit classifies the de-identified image information data as abnormal behavior data (see para [0032]; “Therefore, the abnormal behaviors can be clearly defined….Next, the output unit 17 can be a storage device, a display, a transmitter, or any combination of any number of these devices. The objectives of the system are for real time alarm, or store the abnormal events for further citation”, see also para [0041]; “These behavior models established by the behavior model establishing unit are transmitted to the behavior determining unit, and be used for abnormal behavior determination”).

Claims 2-3, 5-6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Ni et al. in view of Roland et al.
as applied in claim 1 above and further in view of Cella et al. (US 20240118702 A1).

Regarding claim 2, the rejection of claim 1 is incorporated herein. The combination of Ni et al. and Roland et al. as a whole does not teach wherein the detection device includes a time of flight (ToF) sensor. Cella et al. teaches wherein the detection device includes a time of flight (ToF) sensor (see para [2044]; “a camera 12608 is configured to capture images of objects 12606 located within a field of view of the camera 12608. The camera 12608 may be a standard digital camera (i.e., cameras including CCD or CMOS sensors), stereoscopic camera, infrared image sensor, time of flight (TOF) camera”).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify a method for abnormal behavior detection using automatic classification of multiple features of Ni et al. in view of an anonymization apparatus for the generation of anonymized images of Roland et al. and computing devices of a set of value chain network entities for communication with a computing device of an enterprise operator of Cella et al. in order to adjust various optical parameters including lens shape, focal length, liquid materials, environment, and lens arrangement (see para [2044]).

Regarding claim 3, the rejection of claim 1 is incorporated herein. Cella et al.
in the combination further teaches wherein the deep learning server includes: a CNN (Convolutional Neural Network) which extracts feature information from the received de-identified image information data (see para [1182]; “The convolutional layers of the convolutional neural network serve as feature extractors capable of learning and decomposing the input image into hierarchical features”); an LSTM (Long Short Term Memory network) which receives the feature information from the CNN in time series to output behavior prediction information reflecting a temporal change (see para [1158]; “sequential input data can include time-series data (e.g., sensor data versus time or imagery captured at different times). For example, a recurrent neural network can analyze sensor data versus time to detect or predict a swipe direction… Sequential input data may include… sequential object states; etc. In some example embodiments, recurrent neural networks include long short-term (LSTM) recurrent neural networks”); and a classification layer which compares the behavior prediction information received from the LSTM with learned behavior patterns, wherein the classification layer determines similarity of the behavior prediction information with respect to the learned behavior patterns (see para [1251]; “the location determination circuit may compare the environment digital twin generated by the environment digital twin circuit 9142 to a pre-stored environment digital twin 9154 to determine a position of the mobile system”, see also para [1191]; “These classification vectors are compared with the predetermined classification vectors. The error (e.g., the squared sum of differences, log loss, Softmax log loss) between the classification vectors of the CNN and the predetermined classification vectors is determined”, and para [0016]; “The training data set for the set of AI-based learning models may include one of a set of objects or events that are labeled to classify the set of objects or events according to a classification taxonomy that may include at least one of the operating state, the fault condition, the operating flow, or the behavior”), and, based on the similarity, determines a behavior type to which the behavior pattern determined to be most similar belongs as a behavior type of the behavior prediction information (see para [1354]; “based on detecting certain types of individual and/or group behaviors (e.g., as determined by classification module 9510) and predicting that conditions are becoming abnormal or unsafe (e.g., as determined by prediction module 9520)”).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify a method for abnormal behavior detection using automatic classification of multiple features of Ni et al. in view of an anonymization apparatus for the generation of anonymized images of Roland et al. and the computing devices of a set of value chain network entities for communication with a computing device of an enterprise operator of Cella et al., in order to efficiently learn increasingly complex and abstract visual concepts (see para [1182]).

Regarding claim 5, the rejection of claim 4 is incorporated herein. Cella et al. in the combination further teaches wherein the sensor includes a time of flight (ToF) or thermal imaging sensor (see para [2044]; “a camera 12608 is configured to capture images of objects 12606 located within a field of view of the camera 12608.
The camera 12608 may be a standard digital camera (i.e., cameras including CCD or CMOS sensors), stereoscopic camera, infrared image sensor, time of flight (TOF) camera”).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify a method for abnormal behavior detection using automatic classification of multiple features of Ni et al. in view of an anonymization apparatus for the generation of anonymized images of Roland et al. and the computing devices of a set of value chain network entities for communication with a computing device of an enterprise operator of Cella et al., in order to adjust various optical parameters including lens shape, focal length, liquid materials and environment, and lens arrangement (see para [2044]).

Regarding claim 6, the rejection of claim 4 is incorporated herein. Cella et al. in the combination further teaches wherein the abnormal behavior determination unit includes: a CNN (Convolutional Neural Network) which extracts feature information from the received de-identified image information data (see para [1182]; “The convolutional layers of the convolutional neural network serve as feature extractors capable of learning and decomposing the input image into hierarchical features”); an LSTM (Long Short Term Memory network) which receives the feature information from the CNN in time series to output behavior prediction information reflecting a temporal change (see para [1158]; “sequential input data can include time-series data (e.g., sensor data versus time or imagery captured at different times). For example, a recurrent neural network can analyze sensor data versus time to detect or predict a swipe direction… Sequential input data may include… sequential object states; etc. In some example embodiments, recurrent neural networks include long short-term (LSTM) recurrent neural networks”); and a classification layer which compares the behavior prediction information received from the LSTM with learned behavior patterns, wherein the classification layer determines similarity of the behavior prediction information with respect to the learned behavior patterns (see para [1251]; “the location determination circuit may compare the environment digital twin generated by the environment digital twin circuit 9142 to a pre-stored environment digital twin 9154 to determine a position of the mobile system”, see also para [1191]; “These classification vectors are compared with the predetermined classification vectors. The error (e.g., the squared sum of differences, log loss, Softmax log loss) between the classification vectors of the CNN and the predetermined classification vectors is determined”, and para [0016]; “The training data set for the set of AI-based learning models may include one of a set of objects or events that are labeled to classify the set of objects or events according to a classification taxonomy that may include at least one of the operating state, the fault condition, the operating flow, or the behavior”), and, based on the similarity, determines a behavior type to which the behavior pattern determined to be most similar belongs as a behavior type of the behavior prediction information (see para [1354]; “based on detecting certain types of individual and/or group behaviors (e.g., as determined by classification module 9510) and predicting that conditions are becoming abnormal or unsafe (e.g., as determined by prediction module 9520)”).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify a method for abnormal behavior detection using automatic classification of multiple features of Ni et al. in view of an anonymization apparatus for the generation of anonymized images of Roland et al. and the computing devices of a set of value chain network entities for communication with a computing device of an enterprise operator of Cella et al., in order to efficiently learn increasingly complex and abstract visual concepts (see para [1182]).

Regarding claim 8, the rejection of claim 4 is incorporated herein. Cella et al. in the combination further teaches a risk notification unit which recognizes an external risk when the abnormal behavior determination unit classifies the de-identified image information data as abnormal behavior data and transmits a risk signal to the management server (see para [1330]; “The model processing circuit 9432 may use the model to monitor inputs 9492 and enforce the governance standards as specified by the model. For example, the model processing circuit 9432 may generate warnings and alarms, shut down or otherwise modify systems (e.g., if safety parameters have been exceeded), modify/transform/configure data to comply with governance, and/or the like”).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify a method for abnormal behavior detection using automatic classification of multiple features of Ni et al. in view of an anonymization apparatus for the generation of anonymized images of Roland et al. and the computing devices of a set of value chain network entities for communication with a computing device of an enterprise operator of Cella et al., in order to minimize the impact of the disruption (see para [1330]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WINTA GEBRESLASSIE, whose telephone number is (571) 272-3475. The examiner can normally be reached Monday-Friday, 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at 571-270-5180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WINTA GEBRESLASSIE/
Examiner, Art Unit 2677

/ANDREW W BEE/
Supervisory Patent Examiner, Art Unit 2677
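The architecture the examiner maps for claims 3 and 6 (CNN feature extraction, LSTM temporal prediction, and a classification layer that assigns the behavior type of the most similar learned pattern) can be sketched in plain Python to make the similarity-matching step concrete. This is an illustrative sketch only: the pattern vectors, labels, and function names below are hypothetical stand-ins and do not come from the application or the cited Cella reference, which does not fix a particular similarity measure.

```python
import math

# Hypothetical learned behavior patterns: each pairs a pattern vector
# (as an LSTM might emit for a window of CNN features) with a behavior
# type label. Vectors and labels are illustrative toy values.
LEARNED_PATTERNS = [
    ([0.9, 0.1, 0.0], "walking"),
    ([0.1, 0.8, 0.1], "loitering"),
    ([0.0, 0.2, 0.9], "falling"),   # treated here as an abnormal type
]
ABNORMAL_TYPES = {"falling"}

def cosine_similarity(a, b):
    """Similarity between two behavior vectors (higher = more alike)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def classify_behavior(prediction_vector):
    """Compare LSTM behavior-prediction output against learned patterns
    and return (behavior type of the most similar pattern, abnormal?)."""
    best_type, best_sim = None, -1.0
    for pattern, behavior_type in LEARNED_PATTERNS:
        sim = cosine_similarity(prediction_vector, pattern)
        if sim > best_sim:
            best_type, best_sim = behavior_type, sim
    return best_type, best_type in ABNORMAL_TYPES

# Example: a prediction vector close to the "falling" pattern is
# classified as that behavior type and flagged abnormal, which is the
# point at which a risk notification unit (claim 8) would fire.
behavior, is_abnormal = classify_behavior([0.05, 0.15, 0.95])
```

Cosine similarity is one natural reading of "determines similarity"; the quoted para [1191] instead suggests error metrics such as the squared sum of differences, which would work the same way with the comparison inverted (lowest error wins).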

Prosecution Timeline

Feb 16, 2024
Application Filed
Feb 10, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579683
IMAGE VIEW ADJUSTMENT
2y 5m to grant · Granted Mar 17, 2026
Patent 12573238
BIOMETRIC FACIAL RECOGNITION AND LIVENESS DETECTOR USING AI COMPUTER VISION
2y 5m to grant · Granted Mar 10, 2026
Patent 12530768
SYSTEMS AND METHODS FOR IMAGE STORAGE
2y 5m to grant · Granted Jan 20, 2026
Patent 12524932
MACHINE LEARNING IMAGE RECONSTRUCTION
2y 5m to grant · Granted Jan 13, 2026
Patent 12511861
DETECTION OF ANNOTATED REGIONS OF INTEREST IN IMAGES
2y 5m to grant · Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+24.7%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 133 resolved cases by this examiner. Grant probability derived from career allow rate.
