DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
All amendments to the claims filed on 1/16/2026 have been entered, and this action follows:
Response to Arguments
Regarding objection to the claims
Applicant argues, “Because their parent claims are different, claims 16-20 are not duplicates of claims 2-6” (see Remarks, page 7).
Examiner respectfully disagrees; claims 2-6 and 16-20 all depend from claim 1 as recited.
Therefore, the objection to the claims is maintained.
Regarding rejection under 35 USC 112(b)
In view of applicant’s amendments, the rejections are withdrawn.
Regarding rejection under 35 USC 101
Applicant’s arguments have been fully considered but are not persuasive. Applicant argues that the claims are eligible under 35 USC 101 and are not directed to an abstract idea.
Examiner respectfully disagrees. Other than reciting “applying, in real time, an integrated reasoning model,” nothing in the claim precludes the steps from practically being performed in the human mind. For example, a person observing a plurality of images can readily conclude that a person is present at a specific time and location, determine that this may trigger an event calling for intervention, and notify an operator. Further, the integrated reasoning model step is recited at a high level of generality, i.e., as a generic processor performing the generic computer function of processing data. This generic limitation is no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Therefore, the rejection stands.
Regarding rejection under 35 USC 103
Applicant argues, “Gurciullo is not understood to disclose, teach or suggest the features of independent claim 1, particularly at least the feature(s) of ‘querying, in real-time, at least one sensor and/or related sensor data for at least one attribute of at least one element, the at least one element identified in at least one of the plurality of images; tracking the at least one element through the plurality of images to produce, in real-time, spatio-temporal attributes of the at least one element; and ascertaining, in real-time, a state of an environment by integrating the at least one attribute and the spatio-temporal attributes of the at least one element, the environment associated with the site’” (see Remarks, page 14).
Examiner respectfully disagrees. Gurciullo states in paragraph 0020, “…videos of the industrial environment or specific pieces of equipment…,” and in paragraph 0021, “…identify components such as, but not limited to, valves, gauges [read as sensors], switches, and indicators from imaging data, and classify the imaging data under such classifiers as ‘valve,’ ‘gauge,’ ‘switch,’ and ‘indicator,’ respectively….” Furthermore, paragraph 0014 states, “…the raw collected imaging …data may be stored in historical databases to provide the ability to retroactively query the databases based on current needs…,” i.e., “querying,” as noted in paragraph 0020 of the original specification: “…the analysis may further include querying at least one database for at least one database attribute of the at least one element….” Therefore, the data can be queried at any given time.
Therefore, the limitation “querying, in real-time, at least one sensor and/or related sensor data for at least one attribute of at least one element, the at least one element identified in at least one of the plurality of images” is disclosed by Gurciullo.
Also, Gurciullo states in paragraph 0027, “…machine learning engine 120 to quantify imaging data… In particular, FIG. 3 illustrates a frame 310 of a video recording a pressure gauge [sensor]… At step 122, the machine learning engine 120 classifies the video recording under the ‘gauge’ classifier. At step 126, the machine learning engine 120 detects the needle of the gauge and generates a graphical indicator 320. The position or the angle of the graphical indicator 320 [the reading of the gauge is read as the spatio-temporal attribute, i.e., at a specific time the needle is at a specific position] may be used as a quantifier for the pressure measured by the gauge. As the machine learning engine 120 tracks the angle of the graphical indicator 320 from frame to frame [read as tracking] of the video recording, the measured pressure may be monitored in real or near-real time [ascertaining, in real-time, a state].”
Therefore, the limitation “tracking the at least one element through the plurality of images to produce, in real-time, spatio-temporal attributes of the at least one element; and ascertaining, in real-time, a state of an environment by integrating the at least one attribute and the spatio-temporal attributes of the at least one element, the environment associated with the site” is disclosed by Gurciullo.
Claim Objections
Claims 16-20 are objected to under 37 CFR 1.75 as being substantial duplicates of claims 2-6. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1, 14, and 15 recite:
1. (exemplary claim) A method comprising:
Capturing, via one or more cameras, a plurality of images over time from a site; - pre/post activity
transmitting the plurality of images to a hub; - pre/post activity
querying, in real-time, at least one sensor and/or related sensor data for at least one attribute of at least one element, the at least one element identified in at least one of the plurality of images; - mental process
tracking the at least one element through the plurality of images to produce, in real-time, spatio-temporal attributes of the at least one element; - mental process
ascertaining, in real-time, a state of the environment by integrating the at least one attribute and the spatio-temporal attributes of the at least one element, the environment associated with the site; - mental process
applying, in real-time, [an integrated reasoning model] to the state of the environment to identify one or more events that may occur and/or are occurring at the site that require operator intervention; - mental process
and
notifying an operator of the one or more events. - pre/post activity
Step Analysis
1: Statutory Category?
Yes. The claim recites a series of steps and, therefore, is a process.
2A - Prong 1: Judicial Exception Recited?
Yes. The limitations, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting “applying an integrated reasoning model,” nothing in the claim precludes the steps from practically being performed in the human mind. For example, but for the “applying an integrated reasoning model” language, the claim encompasses a user manually identifying, from the images, an event that may occur at the site. This is a mental process.
2A - Prong 2: Integrated into a Practical Application?
No. The claim recites one additional element:
an integrated reasoning model used to perform the identifying step. The integrated reasoning model is recited at a high level of generality, i.e., as a generic processor performing the generic computer function of processing data. This generic limitation is no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
The claim is directed to the abstract idea.
2B: Claim provides an Inventive Concept?
No. As discussed with respect to Step 2A Prong Two, the additional element in the claim amounts to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in 2B, i.e., mere instructions to apply an exception using a generic computer component cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
The claim is ineligible.
Furthermore, dependent claims 2-13 and 16-20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception and therefore are rejected as well.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6 and 8-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gurciullo (US Pub. 2017/0357233).
With respect to claim 1, Gurciullo discloses A method (see Abstract) comprising:
Capturing, via one or more cameras, a plurality of images over time from a site, (see paragraph 0020, wherein …acquiring imaging… from imaging… hardware…);
[transmitting the plurality of images to a hub;]
querying, in real-time, at least one sensor and/or related sensor data for at least one attribute of at least one element, the at least one element identified in at least one of the plurality of images (see paragraph 0020, “…videos of the industrial environment or specific pieces of equipment…,” and paragraph 0021, wherein “…identify components such as, but not limited to, valves, gauges [read as sensors], switches, and indicators from imaging data, and classify the imaging data under such classifiers as ‘valve,’ ‘gauge,’ ‘switch,’ and ‘indicator,’ respectively….” Furthermore, paragraph 0014 states, “…the raw collected imaging …data may be stored in historical databases to provide the ability to retroactively query the databases based on current needs…,” i.e., “querying,” as noted in paragraph 0020 of the original specification: “…the analysis may further include querying at least one database for at least one database attribute of the at least one element….” Therefore, the data can be queried at any given time.);
tracking the at least one element through the plurality of images to produce, in real-time, spatio-temporal attributes of the at least one element; ascertaining, in real-time, a state of the environment by integrating the at least one attribute and the spatio-temporal attributes of the at least one element, the environment associated with the site (see paragraph 0027, wherein “…machine learning engine 120 to quantify imaging data… In particular, FIG. 3 illustrates a frame 310 of a video recording a pressure gauge. …At step 122, the machine learning engine 120 classifies the video recording under the ‘gauge’ classifier. At step 126, the machine learning engine 120 detects the needle of the gauge and generates a graphical indicator 320. The position or the angle of the graphical indicator 320 [the reading of the gauge is read as the spatio-temporal attribute, i.e., at a specific time the needle is at a specific position] may be used as a quantifier for the pressure measured by the gauge. As the machine learning engine 120 tracks the angle of the graphical indicator 320 from frame to frame [read as tracking] of the video recording, the measured pressure may be monitored in real or near-real time [ascertaining, in real-time, a state]”);
applying, in real-time, an integrated reasoning model to the state of the environment to identify one or more events that may occur and/or are occurring at the site that require operator intervention; and notifying an operator of the one or more events, (see paragraph 0013, wherein …the machine learning algorithm may adapt the event detection, alerting, and control protocols of the automated process based on the feedback...), as claimed.
However, Gurciullo fails to explicitly disclose transmitting the plurality of images to a hub, as claimed.
Gurciullo does, however, recite in paragraph 0019, “…The computer system may be entirely contained at one location or may also be implemented across a closed or local network, a cluster of connected computers, an internet-centric network, or a cloud platform…”; this makes obvious that the images can be processed locally or transmitted over a network to a remote system, i.e., a “hub,” as claimed.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of transmitting the images to a remote system (“hub”) in order to process the data, which yields predictable results, as claimed.
With respect to claim 2, Gurciullo further discloses wherein the plurality of images are at least a portion of a video, (see paragraph 0012, wherein The present disclosure describes visual and acoustic analytics using inexpensive, easy-to-install commodity cameras and audio recorders (e.g., microphones) to respectively acquire imaging data (i.e., images and/or videos)), as claimed.
With respect to claim 3, Gurciullo further discloses wherein the plurality of images comprise a visible light image, an infrared image, an ultraviolet image, a thermal image, a night vision image, or any combination thereof, (see paragraph 0012, wherein The present disclosure describes visual …analytics using inexpensive, easy-to-install commodity cameras “a visible light image” …acquire imaging data (i.e., images and/or videos)), as claimed.
With respect to claim 4, Gurciullo further discloses wherein the plurality of images comprise images from two or more different videos, (see paragraph 0020, wherein …method 100 begins at step 110 by acquiring imaging …data from imaging …hardware, …The imaging hardware may be one or more commodity cameras, along with any required peripherals, capable of taking still images and/or videos of the industrial environment or specific pieces of equipment in the industrial environment, from one or more angles “images comprise images from two or more different videos”…), as claimed.
With respect to claim 5, Gurciullo further discloses wherein the one or more elements comprise a person, a group of people, a vehicle, equipment or a component thereof, or any combination thereof, (see paragraph 0021, wherein …For example, the machine learning engine 120 may identify components such as, but not limited to, valves, gauges, switches, and indicators from imaging data “equipment or a component thereof”…), as claimed.
With respect to claim 6, Gurciullo further discloses wherein the at least one sensor comprises an identification scanner, a barrier sensor, an equipment sensor, and any combination thereof, (see paragraph 0021, wherein …For example, the machine learning engine 120 may identify components such as, but not limited to, valves, gauges, switches, and indicators from imaging data “an equipment sensor”…), as claimed.
With respect to claim 8, Gurciullo further discloses querying at least one database for at least one database attribute of the at least one element, (see paragraph 0014, wherein …the raw collected imaging …data may be stored in historical databases to provide the ability to retroactively query the databases based on current needs…), as claimed.
With respect to claim 9, Gurciullo further discloses wherein the tracking uses a convolution neural network to analyze a position of the at least one element through the plurality of images (see paragraph 0027, wherein “…an application of the machine learning engine 120…” is read as “a convolutional neural network”), as claimed.
With respect to claim 10, Gurciullo further discloses wherein the ascertaining of the state of the environment further integrates the at least one attribute and the spatio-temporal attributes with one or more of (a) at least one database attribute, (b) at least one safety attribute, or (c) at least one relational attribute, (see paragraph 0027, wherein …the machine learning engine 120 to quantify imaging data containing a gauge, according to an embodiment of the present disclosure. In particular, FIG. 3 illustrates a frame 310 of a video recording a pressure gauge. …At step 122, the machine learning engine 120 classifies the video recording under the “gauge” classifier. At step 126, the machine learning engine 120 detects the needle of the gauge and generates a graphical indicator 320. The position or the angle “spatio-temporal” of the graphical indicator 320 may be used as a quantifier for the pressure measured by the gauge. As the machine learning engine 120 tracks the angle of the graphical indicator 320 from frame to frame of the video recording, the measured pressure may be monitored “(b) at least one safety attribute, or (c) at least one relational attribute” in real or near-real time), as claimed.
With respect to claims 11 and 12, Gurciullo discloses all the elements as claimed and as disclosed in claim 1 above. However, Gurciullo fails to explicitly disclose wherein the integrating uses a dynamic Bayesian network, and wherein the integrated reasoning model incorporates a Bayesian probabilistic model, as recited in claims 11 and 12, respectively.
However, Examiner takes Official Notice that it is well known in the art to utilize a Bayesian algorithm (“a dynamic Bayesian network” or “a Bayesian probabilistic model”) to classify a set of variables (see US Pub. 2016/0148080, paragraph 0119).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the well-known teachings of a Bayesian network or a Bayesian probabilistic model for integrating the attributes of the element in the images acquired of the industrial environment, which yields predictable results, as claimed.
With respect to claim 13, Gurciullo further discloses wherein the integrated reasoning model incorporates a data-driven model, (see paragraph 0027, wherein …the machine learning engine 120 detects the needle of the gauge and generates a graphical indicator 320. The position or the angle of the graphical indicator 320 may be used as a quantifier for the pressure measured by the gauge. As the machine learning engine 120 tracks the angle of the graphical indicator 320 from frame to frame of the video recording, the measured pressure may be monitored in real or near-real time), as claimed.
Claims 14 and 15 are rejected for the same reasons as set forth in the rejection of claim 1, because claims 14 and 15 claim subject matter of similar scope to that claimed in claim 1.
Claims 16-20 are rejected for the same reasons as claims 2-6, because claims 16-20 are duplicate claims, i.e., the subject matter of claims 16-20 is the same as that claimed in claims 2-6.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Gurciullo (US Pub. 2017/0357233) in view of Kobayashi et al. (US 11,704,930).
With respect to claim 7, Gurciullo discloses all the elements as claimed and as disclosed in claim 1 above. However, Gurciullo fails to explicitly disclose wherein the at least one attribute comprises a badge identification number, a vehicle identification number, a person’s name, a company associated with the at least one element, a job title of a person, and any combination thereof, as claimed.
Kobayashi teaches wherein the at least one attribute comprises a badge identification number, a vehicle identification number, a person’s name, a company associated with the at least one element, a job title of a person, and any combination thereof, (see figure 1B, numerical 51A, “a vehicle identification number”), as claimed.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the references, as they are analogous art solving the similar problem of monitoring using image analysis. The teaching of Kobayashi of reading a vehicle identification number of a vehicle in order to monitor it can be incorporated into the Gurciullo system, as suggested in paragraph 0021 (“…algorithms to identifying, from the monitoring data…”), and modifying the system yields a monitoring system (see col. 1, lines 19-21), providing motivation for the combination.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIKKRAM BALI whose telephone number is (571)272-7415. The examiner can normally be reached Monday-Friday 7:00AM-3:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VIKKRAM BALI/Primary Examiner, Art Unit 2663