Prosecution Insights
Last updated: April 19, 2026
Application No. 18/414,506

METHOD AND SYSTEM FOR DETERMINING A PROPAGATION PATH OF FIRE OR SMOKE

Non-Final OA (§101, §102)
Filed: Jan 17, 2024
Examiner: ADU-JAMFI, WILLIAM NMN
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: L&T Technology Services Limited
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift among resolved cases with an interview)
Avg Prosecution (typical timeline): 2y 9m
Career History: 25 total applications across all art units, 25 currently pending

Statute-Specific Performance

§101: 19.5% (-20.5% vs TC avg)
§103: 36.8% (-3.2% vs TC avg)
§102: 28.7% (-11.3% vs TC avg)
§112: 14.9% (-25.1% vs TC avg)
Benchmarked against the Tech Center average estimate; based on career data from 0 resolved cases.

Office Action

Rejections: §101, §102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1 and 13 are objected to because of the following informalities: in Claim 1, line 3, and Claim 13, line 5, there should be a space between “(ROIs)” and “in.” Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The method of claim 1 is directed to a process, which is one of the statutory categories of invention, and passes Step 1: Statutory Category (MPEP § 2106.03). However, the following limitations of Claim 1 recite steps that can be performed in the human mind or with pen and paper, therefore failing Step 2A Prong One. These steps constitute mental processes because they describe acts of observation, evaluation, and judgment that a human can practically perform mentally.
Determining, by the processor, a plurality of regions of interests (ROIs) in each of the plurality of image frames by:
Detecting, by the processor, one or more objects in each of the plurality of image frames; and
Determining, by the processor, one or more masks based on motion detection and color segmentation for each of the one or more objects in each of the plurality of image frames;
Determining, by the processor, a class for each of the plurality of ROIs using a deep learning model, wherein the class is a fire class or a smoke class, and wherein the deep learning model is trained based on training data corresponding to fire and smoke;
Determining, by the processor, a direction of propagation path of fire or smoke based on a displacement in coordinates of a centroid of each of the plurality of ROIs in each of the plurality of image frames, wherein the displacement is computed for each of the plurality of frames in a time sequence of occurrence of the plurality of frames; and
Rendering, by the processor, an output based on the detection of the class along with the direction of propagation path of fire or smoke.

Claim 1 fails Step 2A Prong Two because the additional elements beyond the judicial exception, including a processor and deep learning model, do not integrate the judicial exception into a practical application. The processor and deep learning model are described only as performing ordinary data processing operations, which are generic computer functions that do not improve the functioning of a computer or any other technology (MPEP § 2106.05(a)). Furthermore, they are computer components used as a tool, amounting to instructions to apply the abstract idea (MPEP § 2106.05(f)). The claim also does not impose meaningful limits on the computer components such that the method is tied to a particular machine; the processor and deep learning model may operate on any generic computing system (MPEP § 2106.05(b)).
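For illustration only, the centroid-displacement limitation quoted above (the step the rejection characterizes as a mental process) can be sketched in a few lines of Python. The function names and the list-of-lists mask representation are hypothetical, not taken from the application or the cited reference:

```python
import math

def centroid(mask):
    """Centroid (x, y) of a binary ROI mask given as rows of 0/1."""
    xs = ys = n = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                xs += x
                ys += y
                n += 1
    return (xs / n, ys / n) if n else None

def propagation_direction(centroids):
    """Angle (degrees) of the net centroid displacement over a
    time-ordered sequence of per-frame centroids."""
    (x0, y0), (x1, y1) = centroids[0], centroids[-1]
    return math.degrees(math.atan2(y1 - y0, x1 - x0))
```

Computing a per-frame centroid and then the angle of its displacement is exactly the kind of observation-plus-arithmetic the mental-process analysis targets; the sketch involves nothing beyond coordinate averaging and one trigonometric call.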
Claim 1 also fails Step 2B, as these additional elements are well-understood, routine, and conventional (WURC), adding nothing significantly more than the abstract idea itself (MPEP § 2106.07(a)(III)). A processor is a generic computer element that is WURC (see MPEP § 2106.05(d)), and the deep learning model is also WURC, as shown by Gaur et al., which states that “deep learning techniques are gaining momentum for detecting fire flames and smoke” (Gaur: Deep Learning-Based Fire Flame and Smoke Detection). As claims 7 and 13 contain this identical ineligible subject matter, they are also rejected.

Claims 2-6 recite steps that can be performed in the human mind or with pen and paper, therefore failing Step 2A Prong One. These steps constitute mental processes because they describe acts of observation, evaluation, and judgment that a human can practically perform mentally. Furthermore, claims 4 and 5 recite mathematical concepts, therefore failing Step 2A Prong One. Mathematical concepts are defined as mathematical relationships, mathematical formulas or equations, or mathematical calculations. The claim must recite (i.e., set forth or describe) a mathematical concept rather than include limitations that are merely based on math.

These claims fail Step 2A Prong Two and Step 2B because the additional elements beyond the judicial exception, including a processor, do not integrate the judicial exception into a practical application and are WURC (see the claim 1 analysis above). The other additional elements beyond the judicial exception, including a Kalman filter, also do not integrate the judicial exception into a practical application (see the claim 1 analysis above) and are WURC. Bai et al. states that “The Kalman filter (KF) is a popular state estimation technique that is utilized in a variety of applications, including positioning and navigation, sensor networks, battery management, etc.” (Bai: Abstract), showcasing that it is well known and in common use in the industry.
As claims 8-12 and 14-18 contain this identical ineligible subject matter, they are also rejected.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gaur et al. (“Video Flame and Smoke Based Fire Detection Algorithms: A Literature Review”).

Regarding Claim 1, Gaur teaches a method of determining a propagation path of fire or smoke, the method comprising (Gaur: Fig. 4):

[Figure: Gaur, Fig. 4 (media_image1.png, grayscale)]

receiving, by a processor, a plurality of image frames captured by an imaging device (Gaur: Fig. 4 (shown above));

determining, by the processor, a plurality of regions of interests (ROIs) in each of the plurality of image frames by: detecting, by the processor, one or more objects in each of the plurality of image frames (Gaur: 2.1 Smoke Detection); “In the second step, the connected component analysis is used for the detection of areas that have changed in the present frame for the background image and to locate ROI (regions of interest).” “The moving regions are segmented using an approximate median method and used for making cluster candidate smoke areas using fuzzy-C means.” “A combination of GMM (Gaussian mixture model) background subtraction and temporal difference technique is used to detect the candidate smoke regions.”

and determining, by the processor, one or more masks based on motion detection and color segmentation for each of the one or more objects in each of the plurality of image frames (Gaur: 2.1 Smoke Detection); “Chen et al. [22] utilized motion and color rules for smoke detection.” “Spatial and temporal wavelet analysis, color segmentation using YCbCr color space and Weber contrast analysis is used for moving blob classification.”

determining, by the processor, a class for each of the plurality of ROIs using a deep learning model, wherein the class is a fire class or a smoke class, and wherein the deep learning model is trained based on training data corresponding to fire and smoke (Gaur: Fig. 4 (shown above) and 3.2 The Dataset); “The proper class labeling of the images is also important for the preparation of the dataset. The most basic division would be fire and nonfire images. Other divisions can be flame, smoke, and nonfire images.”

determining, by the processor, a direction of propagation path of fire or smoke based on a displacement in coordinates of a centroid of each of the plurality of ROIs in each of the plurality of image frames, wherein the displacement is computed for each of the plurality of frames in a time sequence of occurrence of the plurality of frames (Gaur: 2.1 Smoke Detection); “The motion of the centroid of the segmented region computes the flutter direction angle in this approach…the image frames sequence is considered as multidimensional volumetric data in this approach.” “The temporal and spatial coefficients information is taken into one model.”

and rendering, by the processor, an output based on the detection of the class along with the direction of propagation path of fire or smoke (Gaur: Fig. 4 (shown above)).

Regarding Claim 2, Gaur teaches the method of claim 1, wherein the output comprises displaying a bounding box determined based on the plurality of regions of interests and the class for each of the plurality of ROIs (Gaur: Fig. 4 (shown above) and 3.3 Flame Detection). “Zhang et al. [128] also focused on the fire localization problem; for it, they manually annotated present fire patches with bounding boxes of a certain dimension.” Explanation: Also supported by Fig. 4, which depicts localized fire/smoke regions passed to output modules.

Regarding Claim 3, Gaur teaches the method of claim 2, which comprises displaying the direction of propagation path of fire or smoke along with the bounding box based on the displacement in the coordinates of the centroid of the plurality of regions of interest in each of the plurality of image frames (Gaur: 2.1 Smoke Detection). “The motion of the centroid of the segmented region computes the flutter direction angle in this approach.” Explanation: Direction and localization are displayed when centroid motion is computed for segmented ROIs.
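The motion-plus-color masking quoted in the claim 1 mapping (motion rules combined with color rules) can be illustrated with a deliberately crude sketch. The thresholds and the red-dominance test below are invented for illustration; they stand in for the GMM background subtraction and YCbCr segmentation the reference actually describes:

```python
def motion_color_mask(prev, curr, motion_thresh=30, assumed_red_min=150):
    """Illustrative ROI mask: a pixel is kept only if it both moved
    between frames (simple frame differencing) and looks fire-like
    (red-dominant).  Frames are lists of rows of (r, g, b) tuples;
    all thresholds are hypothetical."""
    h, w = len(curr), len(curr[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r, g, b = curr[y][x]
            pr, pg, pb = prev[y][x]
            # Motion rule: total per-channel change exceeds a threshold.
            moved = abs(r - pr) + abs(g - pg) + abs(b - pb) > motion_thresh
            # Color rule: strong red with r > g > b ordering.
            fire_colored = r > assumed_red_min and r > g > b
            if moved and fire_colored:
                mask[y][x] = 1
    return mask
```

The design point is the conjunction: motion alone flags any moving object, color alone flags static red surfaces, and only pixels passing both rules become mask candidates, which is the structure of the "motion and color rules" approaches the review surveys.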
Regarding Claim 4, Gaur teaches the method of claim 3, which comprises determining a rate of propagation of fire or smoke based on the displacement in the coordinates of the centroid of the plurality of ROIs in each of the plurality of image frames, wherein the displacement in the coordinates of the centroid is determined using a Kalman filter (Gaur: 2.1 Smoke Detection). “Ma et al. [35] proposed smoke detection using Kalman filtering and Gaussian color model.” Explanation: Kalman filtering estimates velocity and rate from centroid displacement across frames.

Regarding Claim 5, Gaur teaches the method of claim 4, which comprises determining a next coordinate of a centroid of each of the plurality of ROIs in a subsequent image frame to be captured based on the determination of the direction of propagation path of fire or smoke using the Kalman filter (Gaur: 2.1 Smoke Detection). “Offline samples are used to train the Gaussian color model and Kalman filtering with MHI (moving image history) analysis is used to detect the moving object, in this case, the smoke.” Explanation: A Kalman filter predicts next-state coordinates by definition.

Regarding Claim 6, Gaur teaches the method of claim 5, which comprises: generating, by the processor, an alarm based on the detection of the class for each of the plurality of ROIs and the rate of propagation of fire or smoke (Gaur: Fig. 4 (shown above)); and outputting, by the processor, the predicted next coordinate of the centroid of each of the plurality of ROIs in the subsequent image frame to be captured by displaying an arrow smoke (Gaur: Fig. 4 (shown above)). Explanation: Directional display is shown in Fig. 4, which illustrates directional evacuation outputs.

Regarding Claim 7, Gaur teaches all of the limitations of Claim 1 above because Claim 7 recites a system comprising instructions that cause processors to perform substantially the same functions as those of the method of Claim 1.
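As context for the Kalman-filter limitations of claims 4 and 5, here is a minimal constant-velocity Kalman filter over a single centroid coordinate: its velocity state plays the role of the claimed rate of propagation, and its one-step-ahead prediction plays the role of the claimed next coordinate. The class name and the noise values q and r are illustrative assumptions, not taken from the application or from Ma et al.:

```python
class CentroidKalman1D:
    """Minimal constant-velocity Kalman filter for one centroid
    coordinate.  State is [position, velocity]; the velocity estimate
    doubles as the rate of propagation along that axis.  Process and
    measurement noise (q, r) are illustrative assumptions."""

    def __init__(self, x0, q=1e-3, r=1.0):
        self.x = [float(x0), 0.0]          # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
        self.q, self.r = q, r

    def step(self, z, dt=1.0):
        """Fold in the measured centroid coordinate z and return the
        predicted coordinate one frame ahead."""
        # Predict under a constant-velocity motion model.
        x_p = [self.x[0] + dt * self.x[1], self.x[1]]
        P = self.P
        P_p = [
            [P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + self.q,
             P[0][1] + dt * P[1][1]],
            [P[1][0] + dt * P[1][1],
             P[1][1] + self.q],
        ]
        # Update with the measurement (observation matrix H = [1, 0]).
        s = P_p[0][0] + self.r
        k = [P_p[0][0] / s, P_p[1][0] / s]
        y = z - x_p[0]
        self.x = [x_p[0] + k[0] * y, x_p[1] + k[1] * y]
        self.P = [
            [(1 - k[0]) * P_p[0][0], (1 - k[0]) * P_p[0][1]],
            [P_p[1][0] - k[1] * P_p[0][0], P_p[1][1] - k[1] * P_p[0][1]],
        ]
        return self.x[0] + dt * self.x[1]
```

Feeding it a steadily moving centroid drives the velocity estimate toward the true per-frame displacement, and each `step` call returns a claim-5-style prediction of the next frame's coordinate.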
Regarding Claim 8, Gaur teaches the system of claim 7, and the additional limitations are met as in the consideration of claim 2 above. Regarding Claim 9, Gaur teaches the system of claim 7, and the additional limitations are met as in the consideration of claim 3 above. Regarding Claim 10, Gaur teaches the system of claim 7, and the additional limitations are met as in the consideration of claim 4 above. Regarding Claim 11, Gaur teaches the system of claim 7, and the additional limitations are met as in the consideration of claim 5 above. Regarding Claim 12, Gaur teaches the system of claim 7, and the additional limitations are met as in the consideration of claim 6 above.

Regarding Claim 13, Gaur teaches all of the limitations of Claim 1 above because Claim 13 recites a non-transitory computer-readable medium comprising instructions that perform substantially the same functions as those of the method of Claim 1. Regarding Claim 14, Gaur teaches the non-transitory computer-readable medium of claim 13, and the additional limitations are met as in the consideration of claim 2 above. Regarding Claim 15, Gaur teaches the non-transitory computer-readable medium of claim 13, and the additional limitations are met as in the consideration of claim 3 above. Regarding Claim 16, Gaur teaches the non-transitory computer-readable medium of claim 13, and the additional limitations are met as in the consideration of claim 4 above. Regarding Claim 17, Gaur teaches the non-transitory computer-readable medium of claim 13, and the additional limitations are met as in the consideration of claim 5 above. Regarding Claim 18, Gaur teaches the non-transitory computer-readable medium of claim 13, and the additional limitations are met as in the consideration of claim 6 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Gagliardi et al. (“A real-time video smoke detection algorithm based on Kalman filter and CNN”) describe a real-time video smoke detection approach that employs a CNN to classify smoke regions in image frames and applies Kalman filtering to track the temporal motion of detected smoke across successive frames. Hwan (KR101869442B1) discloses a camera-based fire detection apparatus that identifies candidate smoke regions using background modeling and image processing and classifies the regions using a CNN to distinguish smoke from non-smoke objects.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM ADU-JAMFI, whose telephone number is (571) 272-9298. The examiner can normally be reached M-T 8:00-6:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WILLIAM ADU-JAMFI/
Examiner, Art Unit 2677

/EMILY C TERRELL/
Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Jan 17, 2024
Application Filed
Dec 29, 2025
Non-Final Rejection — §101, §102 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
