Prosecution Insights
Last updated: April 19, 2026
Application No. 17/766,439

IMAGING SYSTEM AND METHOD OF USE THEREOF

Non-Final OA — §101, §103
Filed: Apr 04, 2022
Examiner: PARK, EDWARD
Art Unit: 2675
Tech Center: 2600 — Communications
Assignee: New York Stem Cell Foundation Inc.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% — above average (576 granted / 704 resolved; +19.8% vs TC avg)
Interview Lift: +18.4% on resolved cases with interview (strong)
Avg Prosecution: 2y 9m typical timeline; 27 applications currently pending
Career History: 731 total applications across all art units
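The headline 82% figure is simply the career counts above expressed as a percentage; a quick check (the helper name is mine, not from the dashboard):

```python
def allow_rate(granted: int, resolved: int) -> int:
    """Career allow rate as a whole percentage: granted over resolved cases."""
    return round(100 * granted / resolved)

# 576 granted of 704 resolved cases, as reported for this examiner.
print(allow_rate(576, 704))  # 82
```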

Statute-Specific Performance

§101: 16.9% (-23.1% vs TC avg)
§103: 47.3% (+7.3% vs TC avg)
§102: 21.3% (-18.7% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 704 resolved cases
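Each "vs TC avg" delta in the statute table above implies the same Tech Center baseline, which suggests the estimate is a single flat average rather than a per-statute figure; a sketch of the back-calculation (the numbers are copied from the table, everything else is illustrative):

```python
# Recover the implied Tech Center average for each statute: the delta is
# the examiner's rate minus the Tech Center average, so tc_avg = rate - delta.
rates = {"101": (16.9, -23.1), "103": (47.3, 7.3),
         "102": (21.3, -18.7), "112": (6.3, -33.7)}

for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta
    print(f"§{statute}: TC avg ≈ {tc_avg:.1f}%")
# Every statute resolves to the same ~40.0% baseline.
```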

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant’s election without traverse of Group I, claims 1-12, 70, in the reply filed on 1/16/26 is acknowledged. Claims 1-12, 18-24, 70 are currently pending. Claims 18-24 are withdrawn from consideration.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-12, 70 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter, as follows. Claim 1 recites an abstract idea in the form of a mental process. The claim does not integrate the abstract idea into a practical application, nor does it recite an inventive concept. Claims 2-12, 70 further add refinements to the abstract idea without a practical application or inventive concept. Thus, the listed claims are considered non-statutory subject matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 10 are rejected under 35 U.S.C. 103 as being unpatentable over Held et al (CV: “Learning to Track at 100 FPS with Deep Regression Networks”) in view of De Brouwer et al (US 2018/0289334 A1).

Regarding claim 1, Held teaches an imaging system comprising: i) generate a plurality of chronological images of an image area via the imaging device (see 3.1, 3.4; we feed frames of a video into a neural network, and the network successively outputs the location of the tracked object in each frame); ii) identify a target object within the image area of a most recent image of the plurality of chronological images (see 3.2; The goal of the network is then to regress to the location of the target object within the search region.); iii) generate a target object image area within the image area of the most recent image including the identified target object, the target object area having a perimeter within the image area of the most recent image (see 3.2; The network outputs the coordinates of the object in the current frame, relative to the search region. The network’s output consists of the coordinates of the top left and bottom right corners of the bounding box.
); iv) use a prior image of the image area, and crop the prior image to generate a cropped image area sized to the perimeter of the target object image area (see 3.2; We crop the current frame using the search region and input this crop into our network, as shown in Figure 2. The goal of the network is then to regress to the location of the target object within the search region. In more detail, the crop of the current frame t is centered at c′ = (c′x, c′y), where c′ is the expected mean location of the target object. We set c′ = c, which is equivalent to a constant position motion model, although more sophisticated motion models can be used as well. The crop of the current frame has a width and height of k2w and k2h, respectively, where w and h are the width and height of the predicted bounding box in the previous frame, and k2 defines our search radius for the target object. In practice, we use k1 = k2 = 2. As long as the target object does not become occluded and is not moving too quickly, the target will be located within this region. For fast-moving objects, the size of the search region could be increased, at a cost of increasing the complexity of the network. Alternatively, to handle long-term occlusions or large movements, our tracker can be combined with another approach such as an online-trained object detector, as in the TLD framework [19], or a visual attention model [4,29,2]; we leave this for future work.); v) generate a location region of the cropped image area within the image area of the most recent image (see 3.2; In more detail, the crop of the current frame t is centered at c′ = (c′x, c′y), where c′ is the expected mean location of the target object. We set c′ = c, which is equivalent to a constant position motion model, although more sophisticated motion models can be used as well.
The crop of the current frame has a width and height of k2w and k2h, respectively, where w and h are the width and height of the predicted bounding box in the previous frame, and k2 defines our search radius for the target object. In practice, we use k1 = k2 = 2.); and vi) analyze the location region of the most recent image (see 3.4; During test time, we initialize the tracker with a ground-truth bounding box from the first frame, as is standard practice for single-target tracking. At each subsequent frame t, we input crops from frame t−1 and frame t into the network (as described in Section 3.2) to predict where the object is located in frame t. We continue to re-crop and feed pairs of frames into our network for the remainder of the video, and our network will track the movement of the target object throughout the entire video sequence.).

Held does not teach a) an imaging device; and b) a controller in operable connection to the imaging device, the controller being operable to generate images via the imaging device, and analyze the generated images via a processor, wherein the processor includes functionality to. De Brouwer, in the same field of endeavor, teaches a) an imaging device (see 0025; camera); and b) a controller in operable connection to the imaging device, the controller being operable to generate images via the imaging device, and analyze the generated images via a processor, wherein the processor includes functionality to (see 0026; processor). It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Held to utilize the cited limitations as suggested by De Brouwer. The suggestion/motivation for doing so would have been to enhance the accuracy, efficiency and reliability of the system (see 0006).
Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface and/or programming techniques, without changing a “fundamental” operating principle of Held, while the teaching of De Brouwer continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Regarding claims 2-5, 10, Held teaches: i)-vi) are iterated for each successive image of the plurality of chronological images (see 3.1, 3.4); greater than 10, 100, 1,000, 10,000, 100,000 or more individual images (see 5.1); i)-vi) are iterated when only one target object is identified in the image area (see intro, 3.3, 3.4); identifying the target object in the location region of the most recent image (see 3.2, 6.2); i)-vi) are performed via one or more convolutional neural networks (CNNs) (see 3.3).

Claims 6-9, 70 are rejected under 35 U.S.C. 103 as being unpatentable over Held et al (CV: “Learning to Track at 100 FPS with Deep Regression Networks”) with De Brouwer et al (US 2018/0289334 A1), and further in view of Sato et al (US 2013/0217061 A1).

Regarding claims 6-9, Held with De Brouwer teaches all elements as mentioned above in claim 5. Held with De Brouwer does not teach expressly: analyzing the target object; classifying the target object based on an attribute of the target object; attribute is a physical feature of the target object; physical feature is size or shape.
Sato, in the same field of endeavor, teaches analyzing the target object (see abstract, 0105-0107); classifying the target object based on an attribute of the target object (see 0105-0107, 0093); attribute is a physical feature of the target object (see abstract, 0093); physical feature is size or shape (see 0093). It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Held with De Brouwer to utilize the cited limitations as suggested by Sato. The suggestion/motivation for doing so would have been to optimally detect commonalities (see 0007).

Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface and/or programming techniques, without changing a “fundamental” operating principle of Held with De Brouwer, while the teaching of Sato continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Regarding claim 70, Held with De Brouwer teaches all elements as mentioned above in claim 1. Held with De Brouwer does not teach expressly functionality to recursively process images until the respective most recent image corresponds to a desired temporal snapshot of the image area. Sato, in the same field of endeavor, teaches functionality to recursively process images until the respective most recent image corresponds to a desired temporal snapshot of the image area (see 0093, abstract).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Held with De Brouwer to utilize the cited limitations as suggested by Sato. The suggestion/motivation for doing so would have been to optimally detect commonalities (see 0007). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface and/or programming techniques, without changing a “fundamental” operating principle of Held with De Brouwer, while the teaching of Sato continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Held et al (CV: “Learning to Track at 100 FPS with Deep Regression Networks”) with De Brouwer et al (US 2018/0289334 A1), and further in view of Floto et al (WO 2018/223142 A1).

Regarding claims 11-12, Held with De Brouwer teaches all elements as mentioned above in claim 5. Held with De Brouwer does not teach expressly: target object is a cell or cell colony; cells of the cell colony are monoclonal. Floto, in the same field of endeavor, teaches target object is a cell or cell colony (see abstract); cells of the cell colony are monoclonal (see abstract). It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Held with De Brouwer to utilize the cited limitations as suggested by Floto.
The suggestion/motivation for doing so would have been to reduce storage size of images (see pg. 11). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface and/or programming techniques, without changing a “fundamental” operating principle of Held with De Brouwer, while the teaching of Floto continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Conclusion

Claims 1-12, 70 are rejected. Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD PARK. The examiner’s contact information is as follows: Telephone: (571) 270-1576 | Fax: (571) 270-2576 | Edward.Park@uspto.gov. For email communications, please notate MPEP 502.03, which outlines procedures pertaining to communications via the internet and authorization. A sample authorization form is cited within MPEP 502.03, section II. The examiner can normally be reached on M-F 9-6 CST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Moyer, can be reached at (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov.
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/EDWARD PARK/
Primary Examiner, Art Unit 2666
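The search-region geometry the Office Action quotes from Held (a crop centered on the previous frame's predicted box, scaled by k2 = 2 in each dimension) can be sketched as follows; the function name and the corner-tuple box format are my own illustration, not taken from the reference:

```python
def search_region(prev_box, k2=2.0):
    """Search-region crop for the next frame, per the passage quoted from
    Held: centered at the expected target location (here the previous box
    center, i.e. a constant-position motion model), with width k2*w and
    height k2*h of the previously predicted bounding box."""
    x1, y1, x2, y2 = prev_box               # corners of previous bounding box
    w, h = x2 - x1, y2 - y1                 # previous box width and height
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2   # constant-position center estimate
    return (cx - k2 * w / 2, cy - k2 * h / 2,
            cx + k2 * w / 2, cy + k2 * h / 2)

# A 100x50 box doubles to a 200x100 search region with the same center.
print(search_region((50, 50, 150, 100)))  # (0.0, 25.0, 200.0, 125.0)
```

As the quoted passage notes, a larger k2 widens the region for fast-moving targets at the cost of network complexity.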

Prosecution Timeline

Apr 04, 2022
Application Filed
Mar 26, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602911
SYSTEMS AND METHODS FOR HANDWRITING RECOGNITION USING OPTICAL CHARACTER RECOGNITION
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12602815
WEAKLY PAIRED IMAGE STYLE TRANSFER METHOD BASED ON POSE SELF-SUPERVISED GENERATIVE ADVERSARIAL NETWORK
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12597173
AUTOMATIC GENERATION OF AN IMAGE HAVING AN ATTRIBUTE FROM A SUBJECT IMAGE
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12594023
METHOD AND DEVICE FOR PROVIDING ALOPECIA INFORMATION
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12592000
SYSTEMS AND METHODS FOR PROCESSING DIGITAL IMAGES TO ADAPT TO COLOR VISION DEFICIENCY
Granted Mar 31, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview (+18.4%): 99%
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 704 resolved cases by this examiner. Grant probability derived from career allow rate.
