Prosecution Insights
Last updated: April 19, 2026
Application No. 18/132,495

SYSTEMS AND METHODS OF INDIVIDUAL ANIMAL IDENTIFICATION

Final Rejection §103

Filed: Apr 10, 2023
Examiner: AZIMA, SHAGHAYEGH
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: 406 Bovine Inc.
OA Round: 2 (Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82% (above average; 286 granted / 350 resolved; +19.7% vs TC avg)
Interview Lift: +11.4% (moderate; based on resolved cases with interview)
Avg Prosecution: 2y 7m (typical timeline)
Career History: 386 total applications across all art units (36 currently pending)
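The headline numbers above are internally consistent; here is a quick sanity check of the dashboard arithmetic, using only figures shown on this page and assuming the interview lift is additive in percentage points:

```python
# Examiner statistics as shown on this page.
granted, resolved = 286, 350
pending, total_apps = 36, 386

allow_rate = granted / resolved * 100          # career allow rate, percent
assert round(allow_rate) == 82                 # matches "82% Career Allow Rate"
assert resolved + pending == total_apps        # 350 resolved + 36 pending = 386

# Assuming the +11.4% interview lift is additive in percentage points:
with_interview = allow_rate + 11.4
assert round(with_interview) == 93             # matches "93% With Interview"
```

The unrounded allow rate is 81.7%, so the 93% with-interview figure lines up only if the lift is applied in points, not multiplicatively (81.7% × 1.114 ≈ 91%).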

Statute-Specific Performance

§101: 15.8% (-24.2% vs TC avg)
§103: 42.5% (+2.5% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 14.5% (-25.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 350 resolved cases
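The per-statute deltas above all back out to the same baseline, which suggests the dashboard compares every statute against a single Tech Center average estimate of 40% rather than per-statute averages. A quick check, using the rates from the chart above:

```python
# Per-statute rate and its stated delta vs the TC average (what the rate
# measures is not stated on the page; the arithmetic holds regardless).
rates  = {"101": 15.8, "103": 42.5, "102": 13.9, "112": 14.5}
deltas = {"101": -24.2, "103": 2.5, "102": -26.1, "112": -25.5}

# Subtracting each delta from its rate should recover the baseline.
baselines = {s: round(rates[s] - deltas[s], 1) for s in rates}
assert set(baselines.values()) == {40.0}   # one shared 40% TC-average estimate
```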

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is in response to the applicant's communication filed on 01/23/2026. In virtue of this communication, claims 1, 4-9, 11-13, 15-16, 19-21 filed on 01/23/2026 are currently pending in the instant application. Claims 2-3, 10, 14-15, 17-18 have been cancelled in a preliminary amendment filed on 09/27/2024.

Response to Arguments

Applicant's arguments filed 01/23/2026 have been fully considered. With regard to the rejection under 35 USC 112(b), the rejection is withdrawn in view of the amendment filed on 01/23/2026. With regard to the prior art rejection, the arguments are not persuasive; please see below for the response.

Applicant's Argument I: On pages 6-10, Applicant argued, "Shmigelsky does not teach or suggest capturing video from multiple perspectives during an animal registration phase…The combination of Rooyakkers and Shmigelsky does not remedy this deficiency. Rooyakkers is directed to pet identification using facial recognition with still images, not video capture from multiple perspectives during registration." Further: "The use of video from multiple perspectives during registration improves the quality of the data vector generated for each known animal, enhancing subsequent identification accuracy." And: "While the Examiner proposes combining Shmigelsky's video capture from multiple perspectives with Rooyakkers' pet registration system, such a combination would import Shmigelsky's identification-phase video capture, not registration-phase video capture as claimed. .. these serve different purposes in the respective systems. Accordingly, neither Rooyakkers nor Shmigelsky, alone or in combination, teaches or suggests the limitations of pending claim 1."

Examiner Answer I: Examiner respectfully disagrees. In response to applicant's arguments against the references individually, one cannot show non-obviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Examiner notes that prior art Rooyakkers, ¶[0049], discloses the owner of the pet providing images (more than one image) during the registration process; Figure 10, ¶[0094] discloses capturing images of the pet and the registration process. Prior art Shmigelsky, ¶[0217], is cited to disclose capturing images or video clips of a herd at the site from various viewing angles such as front, side, top, rear, and the like; the combination would show that one or more images in the registration process can be from different perspectives.

In response to applicant's argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988); In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992); and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007).

In this case, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Shmigelsky's technique of animal visual identification into Rooyakkers' technique to provide the known and expected uses and benefits of Shmigelsky's technique over Rooyakkers' technique of matching an animal to existing animal images. The proposed combination would have constituted a mere arrangement of old elements with each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Shmigelsky into Rooyakkers in order to efficiently identify, track, and manage animals. (Refer to Shmigelsky ¶[0017].)

Applicant's Argument II: On pages 6-10, Applicant argued, "Accordingly, claims 11, 12, and 21 are patentable over the cited references for the additional reasons that the references fail to teach or suggest capturing three-dimensional orientation data or depth data of a known animal during a registration phase for the purpose of generating a data vector for that animal."

Examiner Answer II: Examiner respectfully disagrees. In response to applicant's arguments against the references individually, one cannot show non-obviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Examiner notes that prior art Rooyakkers, ¶[0049], discloses the owner of the pet providing images (more than one image) during the registration process; Figure 10, ¶[0094] discloses capturing images of the pet and the registration process.
Further, prior art Borchersen, ¶[0004], is cited to disclose that each laser beam measures intensity, horizontal, vertical, and depth dimensions, and by combining the measurements the system composes a very accurate three-dimensional image of the animal. ¶[0260] discloses comparing the said image with a corresponding feature and/or feature vector obtained from said reference images; such features and/or feature vectors may comprise or be based on values of the area of multiple layers of said 3D-image and/or values selected from the group of the topographic profile of the animal, such as the height of the animal, the broadness of the animal, the contour line along the backbone of the animal, the length of the back, contour plots for different heights of the animal, the volume of the animal above different heights of the animal, the size of cavities, the depth of cavities, and the distance between two pre-selected points on the animal. Further, ¶[0274] discloses using a range camera comprising a depth sensor. Examiner notes the combination of Rooyakkers and Borchersen discloses capturing three-dimensional orientation data or depth data of a known animal during a registration phase.

As for the other claims, Applicant provided the same arguments; Examiner respectfully disagrees and provides similar rationale as indicated above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 4, 5-9, 13, 15, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Rooyakkers et al. (US 2015/0131868) in view of Shmigelsky et al. (WO 2022/077113).

As per claim 1: "An animal identification method comprising: an animal registration phase including:" (Rooyakkers, Figure 10, ¶[0094] discloses the method of registering a pet.)

"capturing with a device, a images of a first known animal," (Rooyakkers, ¶[0049] discloses the owner of the pet providing images (more than one image) during the registration process; furthermore, Figure 10, ¶[0094] discloses the registration process.)
"determining a first data vector based on the images of the first known animal with a model, receiving, on the device, a first set of identifying information of the first known animal" (Rooyakkers, ¶[0094] discloses the facial components being detected in the image and non-biometric metadata being received, including owner information as well as other pet information such as that disclosed in ¶[0063], e.g., eye color, fur color, size, and breed information. Furthermore, ¶[0049] discloses that a profile is generated from the image and metadata information 102; when generating the profile information, the image data may be processed in order to transform it into a normalized version, which may also be stored within the profile, and the generated profile is stored in a data source 106, such as a database of profiles. ¶[0063] discloses features represented as a vector.)

"saving the first data vector and the first set of identifying information for the first known animal to a database," (Rooyakkers, ¶[0049] discloses the generated profile is stored in a data source 106, such as a database of profiles. Furthermore, see Figure 10, ¶[0094].)

"capturing, with the device, an images of a second known animal, determining a second data vector based on the images of the second known animal with the model, receiving, on the device, a second set of identifying information of the second known animal, and saving the second data vector and the second set of identifying information for the second known animal to the database;" (Rooyakkers, ¶[0047] discloses that there are a plurality of pet profiles registered; ¶[0094] discloses that when registering the biometric data with the pet identification functionality, it may be stored in a store of profiles or biometric data of the registered pets. Furthermore, the steps disclosed above would be repeated for all pets being registered; please see ¶¶[0049] and [0094].)

"and an animal identifying phase including: capturing, with the device, at least one image of an unidentified animal," (Rooyakkers, ¶[0065], Figure 4, discloses a method of matching image data to existing pet profiles; the remote device may capture or receive an image of the pet.)

"determining a new data vector based on the at least one image of the unidentified animal with the model, comparing the new data vector to the first data vector and the second data vector," (Rooyakkers, ¶[0070] discloses the profiles may be retrieved from a collection of profiles of pets that have been indicated as being lost, from the entire collection of registered profiles, or from other sources of pet profiles; the features calculated from the biometric image in each profile are compared to those of the located pet in order to determine a matching degree indicative of a similarity between the two. The matching may determine a Euclidean distance between one or more feature vectors of the biometric image of the pet profile and the same one or more feature vectors of the image of the located pet (416).)

"and identifying the unidentified animal as the first known animal or the second known animal," (Rooyakkers, ¶[0071] discloses that once the degree of matching is determined for each profile, the profiles may be filtered based on the determined Euclidean distance as well as other metadata in the profiles and received metadata (418). The results may be filtered so that only those results are returned that have a degree of matching above a certain threshold. For example, only those profiles that were determined to be within a certain threshold distance of each other may be returned.)
Rooyakkers does not explicitly disclose the following, which would have been obvious in view of Shmigelsky, from a similar field of endeavor:

"capturing with a device, a video of a first and second known animals at a first perspective and a second perspective" (Shmigelsky, ¶[0217] discloses commanding the imaging devices 108 to capture images or video clips of a herd at the site 102 from various viewing angles such as front, side, top, rear, and the like, and starting to receive the captured images or video clips from the imaging devices 108 (step 3705). The images may be captured from any angle relative to one or more animals and over various distances.)

"and displaying on the device the identified animal including the corresponding set of identifying information." (Shmigelsky, ¶[0311] discloses showing data of a specific identified animal at the site 102, where the data has been measured and/or otherwise determined through using one or more AI models of the AI pipeline. In at least one embodiment, the example GUIs 762, 764 and 766 may include an ID portion 770 identifying the specific animal (e.g., Cow 07), and an image portion 772 where one or more images, or an image and a video, of the specific animal may also be displayed, such as a still image and a 360-degree live video. The displayed data of the specific animal may include statistical data 774 of the animal. Further, the displayed data may include an animal assessment portion 776 to display one or more assessments that were performed for the identified animal such as, but not limited to, one or more of lameness, temperature, and potential illnesses.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Shmigelsky's technique of animal visual identification into Rooyakkers' technique to provide the known and expected uses and benefits of Shmigelsky's technique over Rooyakkers' technique of matching an animal to existing animal images. The proposed combination would have constituted a mere arrangement of old elements with each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Shmigelsky into Rooyakkers in order to efficiently identify, track, and manage animals. (Refer to Shmigelsky ¶[0017].)

As per claim 4, in view of claim 1, Rooyakkers as modified by Shmigelsky discloses "wherein the animal identifying phase further includes receiving feedback on the device regarding the accuracy of identifying the unidentified animal as the identified animal." (Rooyakkers, Figure 10, ¶[0094] discloses receiving inputs regarding the accuracy of features: the owner of the pet may review the displayed location of the facial components and determine if the components are well positioned (1006); if the components are not well positioned (No at 1006), the positions of the detected facial components may be manually adjusted (1008).)

As per claim 5, in view of claim 1, Rooyakkers as modified by Shmigelsky discloses "wherein the animal is a cow, a sheep, a horse, a pig, a goat, a chicken, a dog, or a cat." (Rooyakkers, ¶[0046] discloses the animal being a dog or cat. Furthermore, ¶[0057] discloses the animal being any other animal.)

As per claim 6, in view of claim 1, Rooyakkers as modified by Shmigelsky discloses "wherein the first set of identifying information includes an ear tag, a lot ID, a sex, a note, a feed performance, a lameness, a sickness, an antibiotic status, a pen movement, or any combination thereof." (Shmigelsky, ¶[0434] discloses displaying the IDs (for example, drop tag IDs) corresponding to the animals listed. In at least one embodiment, the image windows 4056 and the ID list 4058 may enable indexing into videos by allowing a user to click on the image or ID of a listed animal to retrieve videos or images that include the listed animal. In at least one embodiment, a retrieved video of a listed animal may jump to the time instance of when that animal is visible in the video. Other data that may be listed include the gender, age, breed, herd affiliation, fertility status and/or calving date.)

As per claim 7, in view of claim 1, Rooyakkers as modified by Shmigelsky discloses "wherein the model is a machine learning model." (Rooyakkers, ¶[0021] discloses the model being an SVM machine learning model.)

As per claim 8, in view of claim 5, Rooyakkers as modified by Shmigelsky discloses "wherein the model is trained with a set of augmented data." (Rooyakkers, ¶[0059] discloses, for example, that the captured image 302 may be rotated, scaled and cropped in order to generate an image 312 of a predefined size and having the facial components in a specified alignment and orientation.)

As per claim 9, in view of claim 6, Rooyakkers as modified by Shmigelsky discloses "wherein the set of augmented data includes a rotated image, a scaled image, a flipped image, a brightness adjusted image, a generative image, or any combination thereof." (Rooyakkers, ¶[0059] discloses, for example, that the captured image 302 may be rotated, scaled and cropped in order to generate an image 312 of a predefined size and having the facial components in a specified alignment and orientation.)

As per claim 13, in view of claim 1, Rooyakkers as modified by Shmigelsky discloses "wherein determining the new data vector based on the at least one image of the unidentified animal includes" (Rooyakkers, ¶[0040] discloses receiving an initial image of the animal captured at a remote device; processing the initial image to identify facial component locations including at least two eyes; and normalizing the received initial image based on the identified facial component locations to provide the image.)
As per claim 15, in view of claim 5, Rooyakkers as modified by Shmigelsky discloses "wherein the model is trained with a triplet loss technique where a reference input image is compared to a matching input image and a non-matching input image to minimize the difference between the reference input image and the matching input image and maximize the distance between reference input image and the non-matching input image." (Shmigelsky, ¶[0243] discloses, referring again to FIG. 6, at step 312, that one or more sections of the animal 104 may be uniquely identified using a suitable method such as a Triplet-Loss Siamese Network method or the like, based on the placed key points. ¶[0263] discloses that when there is enough data for all the animals in the herd at the site, a new embedding model (i.e., an ID model) is trained, which may be done using a Triplet Loss Siamese Network. Further, ¶¶[0349-0355] disclose an architecture for a Face ID model 3445 that uses a Triplet Siamese neural network. For example, the Face ID model may comprise a Siamese Network and a concatenated classification network. The Siamese Network is trained using a triplet loss training method. In a triplet loss method, the Siamese Network is trained to generate embeddings using an anchor image, a positive image and a negative image. An embedding is a vector of numbers, ranging from 128 to 1024 in size, depending on the body part. Training may be done in two parts, where first the Siamese network is primed and trained to generate embeddings, and then second the Siamese Network model is frozen. The purpose of a Triplet Siamese Network is to learn feature similarity between images that have been extracted using a Convolutional Neural Network (CNN) 3464 and to express these learned features into an embedding space minimizing the Triplet Loss. The CNN 3464 generates embeddings 3465 based on normalized images received from the preprocessing 3462. As previously described, the CNN 3464 is trained to minimize the Triplet loss by iteratively adjusting generated embeddings 3465 to reduce the distance between images belonging to the same class and to increase the distance between images belonging to different classes, which means mapping images that belong to the same class to a closer region in the embedding space and mapping images that belong to different classes further away.)

As per claim 16, in view of claim 1, Rooyakkers as modified by Shmigelsky discloses "wherein determining the first data vector includes generating at least one transformation of the first known animal face in the video." (Rooyakkers, ¶[0049] discloses performing the transformation on the image of the animal; furthermore, please see ¶[0059]. Tao, ¶[0033] discloses a video stream having images of animals from different perspectives.)

As per claim 19, in view of claim 1, Rooyakkers as modified by Shmigelsky discloses "wherein capturing the video of the first known animal includes capturing a face of the first known animal with a bounding box delineated on the device." (Shmigelsky, ¶[0226], Figure 9, bounding boxes 386A and 386B; further, ¶[0264] discloses the DropTag ID is used as the animal ID for the bounding boxes of the head, body, tail, etc. of the animal. For example, the DropTag ID is assigned to the Face bounding box. The key points for the face bounding box are predicted and normalized as described previously. This is generally done for a substantial number of images (e.g., 100 images) for each animal. Figure 26A, bounding box 744.)

Claims 11, 12, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Rooyakkers et al. (US 2015/0131868), in view of Shmigelsky et al. (WO 2022/077113), further in view of Borchersen et al. (US 2020/0143157).
As per claim 11, in view of claim 1, Rooyakkers as modified by Shmigelsky does not explicitly disclose the following, which would have been obvious in view of Borchersen, from a similar field of endeavor: "wherein capturing the video of the first known animal includes capturing three-dimensional orientation data of the first known animal in the video." (Borchersen, ¶[0004] discloses each laser beam measures intensity, horizontal, vertical, and depth dimensions, and by combining the measurements the system composes a very accurate three-dimensional image of the animal. ¶[0259] discloses the method according to any of the items 1 to 3 wherein said image and said reference image are topographic images of the back of the animals, such as 3D images, e.g. multiple layers of 3D-images. ¶[0260] discloses comparing the said image with a corresponding feature and/or feature vector obtained from said reference images; such features and/or feature vectors may comprise or be based on values of the area of multiple layers of said 3D-image and/or values selected from the group of the topographic profile of the animal, such as the height of the animal, the broadness of the animal, the contour line along the backbone of the animal, the length of the back, contour plots for different heights of the animal, the volume of the animal above different heights of the animal, the size of cavities, the depth of cavities, and the distance between two pre-selected points on the animal.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Borchersen's technique of identifying images of animals into the technique of Rooyakkers as modified by Shmigelsky to provide the known and expected uses and benefits of Borchersen's technique over the animal-matching technique of Rooyakkers as modified by Shmigelsky. The proposed combination would have constituted a mere arrangement of old elements with each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Borchersen into Rooyakkers as modified by Shmigelsky in order to accurately monitor each individual animal. (Refer to Borchersen ¶[0006].)

As per claim 12, in view of claim 1, Rooyakkers as modified by Shmigelsky does not explicitly disclose the following, which would have been obvious in view of Borchersen, from a similar field of endeavor: "wherein capturing the video of the first known animal includes capturing a depth of the first known animal in the video." (Borchersen, ¶[0004] discloses each laser beam measures intensity, horizontal, vertical, and depth dimensions, and by combining the measurements the system composes a very accurate three-dimensional image of the animal. ¶[0260] discloses comparing the said image with a corresponding feature and/or feature vector obtained from said reference images, as detailed above for claim 11. Further, ¶[0274] discloses using a range camera comprising a depth sensor.) The motivation to combine is the same as stated for claim 11. (Refer to Borchersen ¶[0006].)

As per claim 21, in view of claim 1, Rooyakkers as modified by Shmigelsky does not explicitly disclose the following, which would have been obvious in view of Borchersen, from a similar field of endeavor: "wherein determining the first data vector based on the video of the first known animal includes utilizing three-dimensional orientation or depth data from the video." (Borchersen, ¶[0004] discloses each laser beam measures intensity, horizontal, vertical, and depth dimensions, and by combining the measurements the system composes a very accurate three-dimensional image of the animal. ¶[0131] discloses that when comparing at least one feature from at least one image with at least one corresponding feature from at least one reference image, the processing means may determine and compare areas of layers of 3D-images; such areas may be part of feature vectors or may constitute features for, e.g., sequentially comparing at least one image with at least one reference image. ¶[0260] discloses that comparing data extracted from said reference image is performed by comparing at least one feature and/or at least one feature vector obtained from the said image with a corresponding feature and/or feature vector obtained from said reference images; such features and/or feature vectors may comprise or be based on values of the area of multiple layers of said 3D-image.) The motivation to combine is the same as stated for claim 11. (Refer to Borchersen ¶[0006].)

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Rooyakkers et al. (US 2015/0131868), in view of Shmigelsky et al. (WO 2022/077113), further in view of Hayes (US Patent No. 4,745,472).

As per claim 20, in view of claim 1, Rooyakkers as modified by Shmigelsky does not explicitly disclose the following, which would have been obvious in view of Hayes, from a similar field of endeavor: "wherein capturing the video of the first known animal is while the first known animal is in an animal retaining mechanism; and wherein the device is movable with respect to the animal retaining mechanism." (Hayes, Col. 1, lines 50-55, discloses the portable measurement system comprises a special chute apparatus for holding the animal during measurement, a pair of portable television cameras, and a video tape system for recording the measurement data on video tape, which can be sent to a central computer processing station for providing the desired information about a particular animal.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Hayes' technique of animal measuring into the technique of Rooyakkers as modified by Shmigelsky to provide the known and expected uses and benefits of Hayes' technique over the animal-matching technique of Rooyakkers as modified by Shmigelsky. The proposed combination would have constituted a mere arrangement of old elements with each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Hayes into Rooyakkers as modified by Shmigelsky in order to accurately evaluate the physical characteristics of animals. (Refer to Hayes Col. 1, lines 7-8.)

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAGHAYEGH AZIMA, whose telephone number is (571) 272-1459. The examiner can normally be reached Monday-Friday, 9:30-6:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent Rudolph, can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAGHAYEGH AZIMA/
Examiner, Art Unit 2671
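The identification scheme the rejection maps onto claim 1 (per-animal feature vectors compared by Euclidean distance against registered profiles, with a match threshold, per Rooyakkers ¶¶[0070]-[0071]) can be sketched as follows. This is an illustrative reconstruction, not code from any cited reference: the registry contents, vector length, and threshold value are invented for the example, and in the cited art the vectors would come from a neural network trained with a triplet loss (Shmigelsky).

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(query_vec, registry, threshold=0.5):
    """Compare a new data vector against registered profiles; return
    the profiles within the match threshold, closest first."""
    scored = [(animal_id, euclidean(query_vec, vec))
              for animal_id, vec in registry.items()]
    return sorted(((i, d) for i, d in scored if d <= threshold),
                  key=lambda m: m[1])

# Toy registry: two known animals with short made-up embeddings.
registry = {
    "Cow 07": [0.10, 0.90, 0.30, 0.50],
    "Cow 12": [0.80, 0.20, 0.70, 0.10],
}
matches = identify([0.12, 0.88, 0.31, 0.52], registry)
print(matches)   # only Cow 07 falls within the threshold
```

In the Rooyakkers mapping, the returned list corresponds to the filtered profiles of ¶[0071]; displaying the top match with its identifying information is the step the rejection attributes to Shmigelsky's GUI (¶[0311]).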

Prosecution Timeline

Apr 10, 2023: Application Filed
Sep 27, 2024: Response after Non-Final Action
Jul 22, 2025: Non-Final Rejection — §103
Jan 23, 2026: Response Filed
Mar 13, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586350: DETERMINING AUDIO AND VIDEO REPRESENTATIONS USING SELF-SUPERVISED LEARNING (granted Mar 24, 2026; 2y 5m to grant)
Patent 12573209: ROBUST INTERSECTION RIGHT-OF-WAY DETECTION USING ADDITIONAL FRAMES OF REFERENCE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561989: VEHICLE LOCALIZATION BASED ON LANE TEMPLATES (granted Feb 24, 2026; 2y 5m to grant)
Patent 12530867: Action Recognition System (granted Jan 20, 2026; 2y 5m to grant)
Patent 12525049: PERSON RE-IDENTIFICATION METHOD, COMPUTER-READABLE STORAGE MEDIUM, AND TERMINAL DEVICE (granted Jan 13, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 93% (+11.4%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate
Based on 350 resolved cases by this examiner. Grant probability derived from career allow rate.
