Prosecution Insights
Last updated: April 19, 2026
Application No. 18/318,444

SMART CAGE SYSTEM AND METHOD FOR HOUSING AND ASSAYING MULTIPLE VERTEBRATE ANIMALS

Status: Final Rejection (§103)
Filed: May 16, 2023
Examiner: ALEKSIC, NEVENA
Art Unit: 3647
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Olden Labs PBC
OA Round: 3 (Final)
Grant Probability: 74% (Favorable)
OA Rounds: 4-5
To Grant: 2y 5m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 74% — above average (78 granted / 105 resolved; +22.3% vs TC avg)
Interview Lift: +9.0% (moderate lift, based on resolved cases with interview)
Avg Prosecution: 2y 5m typical timeline; 24 applications currently pending
Total Applications: 129 across all art units (career history)
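The headline figures above are straightforward ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, assuming the "With Interview" figure is simply the career allow rate plus the reported lift (variable names are illustrative, not from the report):

```python
# Arithmetic behind the examiner statistics above (counts taken from the report).
granted = 78        # cases granted by this examiner
resolved = 105      # total resolved cases
pending = 24        # applications still in prosecution

career_allow_rate = granted / resolved      # ~0.743, shown as 74%
total_applications = resolved + pending     # 129, matching "Total Applications"

# Assumption: the 83% "With Interview" figure is the career rate plus the
# reported +9.0% interview lift, rounded to whole percentage points.
interview_lift = 0.09
with_interview_rate = career_allow_rate + interview_lift   # ~0.833, shown as 83%

print(f"Career allow rate:  {career_allow_rate:.1%}")
print(f"Total applications: {total_applications}")
print(f"With interview:     {with_interview_rate:.1%}")
```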

Statute-Specific Performance

§101: 1.3% (-38.7% vs TC avg)
§103: 49.4% (+9.4% vs TC avg)
§102: 23.2% (-16.8% vs TC avg)
§112: 24.9% (-15.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 105 resolved cases.
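The per-statute deltas are given relative to a Tech Center average that is not listed explicitly. Assuming each delta is simply the examiner's rate minus the TC average, the implied averages can be recovered from the table above (a quick sketch; names are illustrative):

```python
# Statute-specific figures for this examiner (percent), from the table above,
# and the reported differences versus the Tech Center average.
examiner_rate = {"101": 1.3, "103": 49.4, "102": 23.2, "112": 24.9}
vs_tc_average = {"101": -38.7, "103": 9.4, "102": -16.8, "112": -15.1}

# Assumption: delta = examiner_rate - tc_average, so the TC average per
# statute can be back-calculated from the two published numbers.
for statute, rate in examiner_rate.items():
    implied_tc_avg = rate - vs_tc_average[statute]
    print(f"§{statute}: examiner {rate:.1f}% vs implied TC average {implied_tc_avg:.1f}%")
```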

Office Action

§103
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Continued Examination Under 37 CFR 1.114 A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 19, 2026 has been entered. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 1-3, and 17-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Salem et al. (US 2016/0150758 A1), hereinafter Salem, in view of Saliu et al. (US 2023/0045152 A1), hereinafter Saliu. Regarding claim 1, Salem discloses a smart cage system for housing and assaying multiple vertebrate animals comprising: at least one inner housing assembly (cage 20, fig.7) and at least one outer housing assembly (Para. [0038], “[t]he SCORHE 10 is configured and dimensioned to enclose or engage a home cage 20 for monitoring mice living therein”; fig. 1), the housing assemblies each having a top portion (inner housing assembly: as shown in fig. 7, cage 20 has a top surface; outer housing assembly: as shown in figs. 5A-5B & fig. 6, SCHORE enclosure 100 may sit on the shelves 302 of the rack system 300), a bottom portion (inner housing assembly: as shown in fig. 7, cage 20 has a bottom surface 26; outer housing assembly: as shown in figs. 5A-5B & fig. 6, the SCORHE enclosure 100 includes a bottom surface 141), and at least one side portion (inner housing assembly: as shown in fig. 7, cage 20 has one or more walls 24; outer housing assembly: as shown in figs. 5A-5B & fig. 
6, the SCORHE enclosure includes panel assemblies 142), the inner housing assemblies adapted to be at least partly disposed within and removable from the outer housing assemblies (Para. [0046], “the door assembly 160 of the SCORHE enclosure 100 includes a hinged door 102 with a magnetic latch 104 to allow for insertion and removal of the home cage 20 from the rack 300 without removing the enclosure from the rack system”); the smart cage including at least one controller (computing device 202, fig. 1) adapted to monitor and record data from two sensors from a group of: optical sensors, motion sensors, pressure sensors, weight sensors, temperature sensors, humidity sensors, proximity sensors, chemical sensors, volume sensors, level sensors, audio sensors, odor sensors, heartbeat sensors, brainwave sensors, bite force sensors, body mass sensors, color sensors, rotary sensors, light sensors, oscillation sensors, balance sensors, reflex or reaction sensors, waterflow sensors, force meter sensors, load sensors, electrical sensors, and bite strength sensors1 (optical sensor chosen; Para. [0044], “each SCORHE enclosure includes…at least one port and/or wire to transmit the output from the dual-camera system 120 to the computing device 202, as shown in Fig. 1, via a wired or wireline connection. Furthermore, Para. [0068] discloses “the video data is recorded and stored at a video recorder device 208, including digital video and audio for subsequent processing”; as shown in fig. 5A there is a rear camera 122B and a front camera 122A), the sensors adapted to monitor at least one or more of the environment (Para. [0047], “the cameras 122A-B are suitable for performing video analysis when two or more mice are housed in the cage 20”), and multiple vertebrate animals within the inner housing assembly (mice 400A-B; figs. 8A & 8B); the two or more sensors operationally synchronized at least one or more of before, in real time, and after sensing an action, wherein data captured by each sensor can be synchronized by way of at least one time measuring device (Paras. [0094-95], a time stamp is required to assess the data of the dual camera system); a multi object tracking software system (Para. [0075], SCORHE Video Processing application (SVPA) 204 includes the segmentation module 218 and the occlusion reasoning module 224) operationally coupled to at least two optical sensors by the at least one controller (Para. [0071], “the SCORHE 10 includes at least one computing device 202 executing a SCORHE Video Processing application (“SVPA) 204 for processing video data”), the multi object tracking software adapted to track individuals of the multiple vertebrate animals by way of at least two or more from a group of: object detection, object reidentification, generating trajectories, and aggregating features2 (object detection chosen; the segmentation module 218 identifies objects in each video output from the cameras 122A-B of the SCORHE enclosure 100. For example, the segmentation module 218 partitions each frame of the video outputs into separate objects, based at least in part on the color, intensity, or texture of pixels in the image within each frame”); and at least the outer housing assembly including at least one or more from a group of ports, slots, shelves, pockets, hooks, fasteners, and sleeves each adapted to retain at least one sensor3 (slot chosen; as shown in fig. 5A, the dual camera system 120 is retained in a slot of the SCORHE enclosure 100). 
However, Salem does not appear to specifically disclose wherein the multi object tracking software was further configured to use a plurality of models from a cross-checkable group of: appearance models, motion models, interaction models, exclusion models, and occlusion handling models, wherein the tracking of individual vertebrate animals is time-synchronized across multiple sensors and configured to maintain continuous identification of each animal, animal location data synced with sensor data to identify which individual animals produced what sensor data, the multi object tracking software further configured to cross-check tracking outputs from the plurality of models by comparing outputs of at least two of the appearance models, motion models, interaction models, exclusion models, or occlusion handling models to validate individual animal identification, and to cross-check the animal location data with the synchronized sensor data to enhance tracking accuracy in complex multi-animal scenarios. Saliu is in the field of a system to simultaneously track multiple organisms at high resolution (Abstract) and teaches wherein the multi object tracking software was further configured to use a plurality of models (as discussed in para. [0182-184], the projection process cross references a plurality of models, or in the alternative, at least the YOLO series discloses a plurality of models) from a cross-checkable group of4: appearance models (as discussed in Para. [0099], “unique identification” reads on appearance models), motion models (as discussed in para. [0102], the tracking of a moving object), interaction models (as shown in Figs. 4c to 4e), exclusion models, and occlusion handling models (as shown in Figs. 4c to 4e), wherein the tracking of individual vertebrate animals is time-synchronized across multiple sensors and configured to maintain continuous identification of each animal (Para. [0108], “[t]he object detection algorithm may also be used to identify multiple objects across multiple image frames acquired as a function of time, to enable object tracking as a function of time”. See also Para. [0102]), animal location data synced with sensor data to identify which individual animals produced what sensor data (Para. [0099], “the objects are classified 457 into categories, for example, by a detection algorithm such as convolutional neural network (CNN). For example, CNN can report a classification score for each object, in addition to the location and bounding box widths and heights. The classification score can be used to categorize each object”), the multi object tracking software further configured to cross-check tracking outputs from the plurality of models by comparing outputs of at least two of the appearance models, motion models, interaction models, exclusion models, or occlusion handling models configured to validate individual animal identification (fig. 24 represents a continuous tracking of the plurality of models (i.e., detecting the organisms) and, specifically, operation 2415 repeats the steps as a function of time for tracking), and to cross-check the animal location data with the synchronized sensor data to enhance tracking accuracy in complex multi-animal scenarios (Para. [0002], “[o]bserving such unconstrained movement and interaction is helpful for improving our understanding of organism behavior”). 
Furthermore, Saliu discloses wherein the two or more sensors operationally synchronized at least one or more of before, in real time, and after sensing an action, wherein data captured by each sensor can be synchronized by way of at least one time measuring device (Para. [0184], “[t]he CNN process can be applied to the captured images, such as to each camera image data in parallel, to create bounding box coordinates for detected objects for each camera. The bounding box coordinates can be aggregated, using inter-camera overlaps to reduce double-counting objects and to merge objects having portions in multiple neighbor cameras. The object detection CNN algorithms can additionally report a classification score for each object, in addition to the location and bounding box width/heights. The classification score can be used to categorize each object. Categorizations include unique identification of objects, or for unique identification of object type.”; see also fig. 24). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Salem such that the multi object tracking software was further configured to use a plurality of models from a cross-checkable group, and wherein the tracking of individual organisms is time-synchronized as taught by Saliu, in order to benefit from tracking multiple organisms at the same time. Regarding claim 2, Salem in view of Saliu discloses the invention in claim 1, and Salem further discloses wherein the two or more sensors are operationally coupled to the outer housing assembly (see dual camera system 120 on SCORHE enclosure 100, fig. 5A) wherein the inner housing assembly may be removed without removing the at least one or more sensors (Para. [0046], “the door assembly 160 of the SCORHE enclosure 100 includes a hinged door 102 with a magnetic latch 104 to allow for insertion and removal of the home cage 20 from the rack 300 without removing the enclosure from the rack system”; as shown in figs. 4 & 5A). Regarding claim 3, Salem in view of Saliu discloses the invention in claim 1, and Salem further discloses wherein at least one physiological software system is adapted (coordinate module 222, fig. 10), from the data gathered from the two or more sensors and a multi object tracking software system measures of each of the multiple vertebrate animals from at least one or more from a group of: lifespan, frailty index, muscle strength, run endurance, learning and memory, balance and coordination, body weight, food intake, total time spent in sleep and awake, temporal pattern of being asleep and awake, speed of nest building, visual acuity, hearing acuity, water intake, and coat color/density, position tracking, distance traveled, movement speed, sleep time, cardiovascular health, cognition, balance and coordination, tremors, gait deficiencies, vision movement, and speed of nest building5 (distance traveled and movement speed chosen; Para. [0081], “[t]he coordinate module 222 determines the physical coordinates of the objects identified by the segmentation module 218 in each frame. For objects in motion, the coordinate module 222 may use temporal data to determine the distance travelled by the object (i.e. mouse) as well as the speed of the object”). Regarding claim 17, Salem discloses a smart cage system for housing and assaying multiple vertebrate animals comprising: a plurality of inner housing assemblies and outer housing assemblies (fig. 
4), the housing assemblies each having a top portion (inner housing assembly: as shown in fig. 7, cage 20 has a top surface; outer housing assembly: as shown in figs. 5A-5B & fig. 6, SCHORE enclosure 100 may sit on the shelves 302 of the rack system 300), a bottom portion (inner housing assembly: as shown in fig. 7, cage 20 has a bottom surface 26; outer housing assembly: as shown in figs. 5A-5B & fig. 6, the SCORHE enclosure 100 includes a bottom surface 141), and at least one side portion (inner housing assembly: as shown in fig. 7, cage 20 has one or more walls 24; outer housing assembly: as shown in figs. 5A-5B & fig. 6, the SCORHE enclosure includes panel assemblies 142), each inner housing assembly adapted to be at least partly disposed within and removable from the respective outer housing assembly (Para. [0046], “the door assembly 160 of the SCORHE enclosure 100 includes a hinged door 102 with a magnetic latch 104 to allow for insertion and removal of the home cage 20 from the rack 300 without removing the enclosure from the rack system”), the housing assemblies further disposed within at least one rack assembly adapted to hold the plurality of at least one housing assemblies wherein housing assemblies may be operationally contained at least one or more of vertically and horizontally from each other (double-bay rack 300, fig. 4), and wherein the housing assemblies may be at least partially removed from the rack independently from other housing assemblies (fig. 4); the plurality of smart cages including at least one controller (computing device 202, fig. 1) adapted to monitor and record data from two or more sensors from a group of: optical sensors, motion sensors, pressure sensors, weight sensors, temperature sensors, humidity sensors, proximity sensors, chemical sensors, volume sensors, level sensors, audio sensors, odor sensors, heartbeat sensors, brainwave sensors, body mass sensors, color sensors, rotary sensors, light sensors, oscillation sensors, balance sensors, reflex or reaction sensors, waterflow sensors, force meter sensors, load sensors, electrical sensors, and bite strength sensors6 (optical sensor chosen; Para. [0044], “each SCORHE enclosure includes…at least one port and/or wire to transmit the output from the dual-camera system 120 to the computing device 202, as shown in Fig. 1, via a wired or wireline connection. Furthermore, Para. [0068] discloses “the video data is recorded and stored at a video recorder device 208, including digital video and audio for subsequent processing”; as shown in fig. 5A there is a rear camera 122B and a front camera 122A), the sensors adapted to monitor at least one or more of the environment (Para. [0047], “the cameras 122A-B are suitable for performing video analysis when two or more mice are housed in the cage 20”), and at least one vertebrate animal within the inner housing assembly (one mouse 400B; figs. 8A & 8B), wherein each smart cage may have unique configurations of at least one or more sensors (cameras 122A-B, fig. 5A); the at least two or more sensors operationally synchronized at least one or more of before, in real time, and after sensing an action, wherein data captured by each at least one or more sensors can be synchronized by way of at least one time measuring device (Paras. [0094-95], a time stamp is required to assess the data of the dual camera system); a multi object tracking software system (Para. 
[0075], SCORHE Video Processing application (SVPA) 204 includes the segmentation module 218 and the occlusion reasoning module 224) operationally coupled to at least two optical sensors by the at least one controller (Para. [0071], “the SCORHE 10 includes at least one computing device 202 executing a SCORHE Video Processing application (“SVPA) 204 for processing video data”), the multi object tracking software adapted to track individuals of the multiple vertebrate animals by way of two or more from a group of: object detection, object reidentification, generating trajectories, and aggregating features7 (object detection chosen; the segmentation module 218 identifies objects in each video output from the cameras 122A-B of the SCORHE enclosure 100. For example, the segmentation module 218 partitions each frame of the video outputs into separate objects, based at least in part on the color, intensity, or texture of pixels in the image within each frame”); and at least the outer housing assembly including at least one or more from a group of ports, slots, shelves, pockets, hooks, fasteners, and sleeves each adapted to retain at least one sensor8 (slot chosen; as shown in fig. 5A, the dual camera system 120 is retained in a slot of the SCORHE enclosure 100). However, Salem does not appear to specifically disclose wherein the multi object tracking software was further configured to use a plurality of models from a cross-checkable group of: appearance models, motion models, interaction models, exclusion models, and occlusion handling, the tracking of individual vertebrate animals adapted to be time synchronized across multiple sensors and adapted to maintain continuous identification of each animal, animal location data synced with sensor data to identify which individual animals produced what sensor data, the multi object tracking software further configured to cross-check tracking outputs from the plurality of models by comparing outputs of at least two of the appearance models, motion models, interaction models, exclusion models, or occlusion handling models to validate individual animal identification, and to cross-check the animal location data with the synchronized sensor data to enhance tracking accuracy in complex multi-animal scenarios. Saliu is in the field of a system to simultaneously track multiple organisms at high resolution (Abstract) and teaches wherein the multi object tracking software was further configured to use a plurality of models (as discussed in para. [0182-184], the projection process cross references a plurality of models, or in the alternative, at least the YOLO series discloses a plurality of models) from a cross-checkable group of9: appearance models (as discussed in Para. [0099], “unique identification” reads on appearance models), motion models (as discussed in para. [0102], the tracking of a moving object), interaction models (as shown in Figs. 4c to 4e), exclusion models, and occlusion handling models (as shown in Figs. 4c to 4e), wherein the tracking of individual vertebrate animals is time-synchronized across multiple sensors and configured to maintain continuous identification of each animal (Para. [0108], “[t]he object detection algorithm may also be used to identify multiple objects across multiple image frames acquired as a function of time, to enable object tracking as a function of time”. See also Para. [0102]), animal location data synced with sensor data to identify which individual animals produced what sensor data (Para. 
[0099], “the objects are classified 457 into categories, for example, by a detection algorithm such as convolutional neural network (CNN). For example, CNN can report a classification score for each object, in addition to the location and bounding box widths and heights. The classification score can be used to categorize each object”), the multi object tracking software further configured to cross-check tracking outputs from the plurality of models by comparing outputs of at least two of the appearance models, motion models, interaction models, exclusion models, or occlusion handling models to validate individual animal identification (fig. 24 represents a continuous tracking of the plurality of models (i.e., detecting the organisms) and, specifically, operation 2415 repeats the steps as a function of time for tracking), and to cross-check the animal location data with the synchronized sensor data to enhance tracking accuracy in complex multi-animal scenarios (Para. [0002], “[o]bserving such unconstrained movement and interaction is helpful for improving our understanding of organism behavior”). Furthermore, Saliu discloses wherein the two or more sensors operationally synchronized at least one or more of before, in real time, and after sensing an action, wherein data captured by each sensor can be synchronized by way of at least one time measuring device (Para. [0184], “[t]he CNN process can be applied to the captured images, such as to each camera image data in parallel, to create bounding box coordinates for detected objects for each camera. The bounding box coordinates can be aggregated, using inter-camera overlaps to reduce double-counting objects and to merge objects having portions in multiple neighbor cameras. The object detection CNN algorithms can additionally report a classification score for each object, in addition to the location and bounding box width/heights. The classification score can be used to categorize each object. Categorizations include unique identification of objects, or for unique identification of object type.”; see also fig. 24). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Salem such that the multi object tracking software was further configured to use a plurality of models from a cross-checkable group, and wherein the tracking of individual organisms is time-synchronized as taught by Saliu, in order to benefit from tracking multiple organisms at the same time. Regarding claim 18, Salem in view of Saliu discloses the invention in claim 17, and Salem further discloses wherein the two or more sensors is operationally coupled to the outer housing assembly (see dual camera system 120 on SCORHE enclosure 100, fig. 5A) wherein the inner housing assembly may be removed without removing the two or more sensors (Para. [0046], “the door assembly 160 of the SCORHE enclosure 100 includes a hinged door 102 with a magnetic latch 104 to allow for insertion and removal of the home cage 20 from the rack 300 without removing the enclosure from the rack system”; as shown in figs. 4 & 5A). Regarding claim 19, Salem in view of Saliu discloses the invention in claim 17, and Salem further discloses wherein at least one physiological software system is adapted (coordinate module 222, fig. 
10), from the data gathered from the at least one or more sensors and a multi object tracking software system measures of each of the multiple vertebrate animals from at least one or more from a group of: lifespan, frailty index, muscle strength, run endurance, learning and memory, balance and coordination, body weight, food intake, total time spent in sleep and awake, temporal pattern of being asleep and awake, speed of nest building, visual acuity, hearing acuity, water intake, and coat color/density, position tracking, distance traveled, movement speed, sleep time, cardiovascular health, cognition, balance and coordination, tremors, gait deficiencies, vision movement, and speed of nest building10 (distance traveled and movement speed chosen; Para. [0081], “[t]he coordinate module 222 determines the physical coordinates of the objects identified by the segmentation module 218 in each frame. For objects in motion, the coordinate module 222 may use temporal data to determine the distance travelled by the object (i.e. mouse) as well as the speed of the object”). Claim(s) 4, 7, 20, and 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Salem in view of Saliu as applied to claims 1 and 17 above, respectively, and further in view of Harada et al. (US 2019/0183089 A1), hereinafter Harada. Regarding claims 4 and 20, Salem in view of Saliu discloses the invention in claims 1 and 17 above, respectively, but is silent regarding the inner housing assembly including a cage floor on which is disposed at least one run wheel, and a tray disposed within the inner housing assembly to contain animal feed. Harada is in the field of an animal husbandry equipment (Abstract) and teaches a run wheel disposed (wheel 105, fig. 1), and a tray containing animal feed (food tray 109, fig. 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Salem such that there was a wheel disposed in the housing as taught by Harada, in order to monitor the physical activity of the subject in the cage. Furthermore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Salem such that there was a feeding tray as taught by Harada, in order to provide the subject in the cage with continuous food. As a result of the above modification, the wheel and food tray of Harada will be disposed in the inner housing assembly of Salem. Regarding claims 7 and 23, Salem in view of Saliu discloses the invention in claims 1 and 17 above, respectively, but does not appear to specifically disclose wherein the side portion of the outer housing assembly includes at least one or more of: a control panel adapted for mouse learning and memory assays, at least one air valve, at least one main water dispenser and valve, at least one secondary water dispenser and valve adapted for reward administration, at least one speaker and a microphone adapted for hearing acuity testing, at least one force meter for a grip bar adapted for muscle strength testing, at least one force meter for weight estimation, and at least one force meter for bite strength estimation. 
However, Harada is in the field of an animal husbandry equipment (Abstract) and teaches wherein the side portion of the outer housing assembly includes at least one or more of: a control panel adapted for mouse learning and memory assays, at least one air valve, at least one main water dispenser and valve, at least one secondary water dispenser and valve adapted for reward administration, at least one speaker and a microphone adapted for hearing acuity testing, at least one force meter for a grip bar adapted for muscle strength testing, at least one force meter for weight estimation, and at least one force meter for bite strength estimation11 (at least one air valve chosen; Para. [0022], “33 and 34 show optional air in and air out vents or ducts for the cage 35”; as shown in fig. 3). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Salem such that the outer housing had at least one air valve as taught by Harada, in order to provide a modular system with access and control of the animal cage from the outside without disturbing the vertebrate animals in the inner housing. Furthermore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to position the air valve on the side portion of the cage, such that it does not interfere with the top portion when opening and closing the cage. Claim(s) 5 and 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Salem in view of Saliu as applied to claims 1 and 17 above, respectively, and further in view of Cheung et al. (US 2018/0055434 A1), hereinafter Cheung, and Brunner et al. (US 2003/0028327 A1), hereinafter Brunner. Regarding claims 5 and 21, Salem in view of Saliu discloses the invention in claims 1 and 17 above, respectively, and Salem further discloses wherein the outer housing assembly includes at least one or more of: a controller adapted to be a local hub to measure and integrate data associated with smart cage assays conducted using the two or more sensors, the controller operationally coupled at least one or more of wired and wirelessly (Para. [0044], “the SCORHE enclosure 100 includes a dual-camera system 120…each SCORHE enclosure includes a power supply (not shown) and at least one port and/or wire (not shown to transmit the output from the dual-camera system 120 to the computing device 202, as shown in FIG. 1, via a wired or wireline connection. Alternatively, the SCORHE enclosure 100 may include one or more wireless transmitters and receivers to transmit or receive data wirelessly), at least one or more of directly or by way of at least one other computer to report data to a central data processing system (Para. [0068], “[d]ata captured by the dual camera system 120 of the SCORHE enclosure 100 may be transmitted to the computing device 202 for video processing in a number of ways”); at least one overhead LED [strip] adapted to cover at least a portion of a horizontal dimension of the smart cage (Para. [0062], “in one embodiment, the light source is a pair of high-flux light emitting diode (LED) strips 150A-B”; as shown in figs. 5A & 9 the top of the panel 142 has LED strips 150A); at least one or more of an infrared camera and a near infrared camera adapted to record video substantially continuously from the cage (Paras. 
[0059-60], “[t]he Near-Infrared (NIR) Illumination System…the SCORHE dual-camera system 120 can record mouse activity for the entire duration of the diurnal cycle due, at least in part, to the NIR illumination system 140”), the video adapted to be used at least for individual animal position tracking (Para. [0048], “the cameras 122A-B are better suited to capture video that provides sufficient mouse shape detail, improve the accuracy of tracking multiple mice, and facilitate detection of mice in images despite frequent changes in bedding position and appearance”); and at least one or more of an infrared and a near infrared LED adapted to illuminate an interior portion of the inner smart cage (Para. [0060], NIR illumination system 140). However, modified Salem does not appear to specifically disclose wherein the top portion of the outer housing assembly includes the at least one or more sensor (i.e., the camera). Further Salem does not appear to specifically disclose an overhead LED screen, the overhead LED screen used to display at least a looming spot for vision assays. Cheung is in the field of behavior testing and training of animals (Para. [0002]) and teaches wherein the top portion of the outer housing assembly includes the at least one or more sensor (Para. [0172], “the subject camera 45 is mounted to the top of the enclosure 50”; as shown in fig. 1A). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the camera of Salem such that it was positioned on the top portion of the outer housing as taught by Cheung, in order to record the images of the subject from above and also keep the camera out of the way. Brunner is in the field of a system and method to assess animal behavior for measuring reaction in a cage (Abstract) and further teaches a visual screen to display at least a looming spot for vision assays (Para. [0258] & Para. [0263], the visual stimuli can be represented on a screen 35; as shown in fig. 3. Examiner notes, the screen is capable of projecting a looming spot). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Salem such that there was a visual screen in the cage as taught by Brunner, in order to monitor the physical effects of the screen on the test subject. Furthermore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the screen of Brunner such that it was an LED screen, in order to make the device more energy efficient and utilizing known technologies, with the reasonable expectation of success. Furthermore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the screen of Brunner such that it was positioned over the top of the cage such that it does not physically interfere with the subject in the cage. Claim(s) 6 and 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over modified Salem as applied to claims 5 and 21 above, respectively, and further in view of Kumar et al. (US 2020/0337272 A1), hereinafter Kumar. Regarding claims 6 and 22 above, modified Salem discloses the invention in claims 5 and 21 above, respectively, but is silent regarding at least one pass filter is disposed at least one or more of in front and behind at least one camera lens. 
However, Kumar is in the field of continuous animal behavioral monitoring (Abstract) and teaches wherein at least one pass filter is disposed at least one or more of in front and behind at least one camera lens (Para. [0145], “in order to block approximately all visible light from reaching the camera 210 during video data acquisition, IR long-pass filters can be employed. As an example, a physical IR long-pass filter can be employed with the camera(s) 110. This configuration can provide substantially uniform lighting regardless a light or dark phases in the arena 200”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the camera of Salem such that there was at least one pass filter disposed in front of the camera lens as taught by Kumar, in order to block all visible light from reach the camera during video data acquisition (Kumar: para. [0145]). Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Salem in view of Saliu as applied to claim 1 above, and further in view of Ingley, III et al. (US 7,497,187 B2), hereinafter Ingley. Regarding claim 8, Salem in view of Saliu discloses the invention in claim 1, but does not appear to specifically disclose wherein the outside cage assembly includes at least one RFID reader. However, Ingley is in the field of an animal containment device (Abstract) and teaches wherein the outer cage assembly includes at least one RFID reader (Col. 6, lines 43-45, “the means for identifying the cage may include a radio frequency identification (RFID) tag that is attached to the cage”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Salem such that the outside cage assembly includes at least one RFID reader as taught by Ingley, in order to identify the cage. Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Salem in view of Saliu as applied to claim 1 above, and further in view of Boguszewski et al. (US 2018/0271052 A1), hereinafter Boguszewski. Regarding claim 9, Salem in view of Saliu discloses the invention in claim 1, but does not appear to specifically disclose wherein more than one smart cage may be operationally coupled and configured to permit passage of animals therebetween by way of the one or more ports. However, Boguszewski is in the field of a system and method for testing spontaneous social interactions in group-housed mice (Abstract) and teaches wherein more than one smart cage may be operationally coupled and configured to permit passage of animals therebetween by way of the one or more ports (Para. [0041], “[0041] The housing compartments are bridged by a suitable number of tube-shaped corridors (105). These inter-territorial connections enable mice to freely travel between compartments and spend time with their preferred conspecific subgroup or favoured territory areas”; as shown in fig. 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the smart cage of Salem such that there was a tube-shaped corridor connecting each cage together as taught by Boguszewski, in order to enable mice to freely travel between compartments (Boguszewski: Para. [0041]). Claim(s) 24 is/are rejected under 35 U.S.C. 103 as being unpatentable over Salem as applied to claim 17 above, and further in view of Copeland et al. (US 2006/0185611 A1), hereinafter Copeland. 
Regarding claim 24, Salem in view of Saliu discloses the invention in claim 17, but does not appear to specifically disclose wherein a robotic arm assembly is operationally coupled to move horizontally and vertically substantially along the entirety of the height and width of the rack assembly and is further adapted to remove housing assemblies at least partly from the rack. However, Copeland is in the field of a robotic animal handling system (Abstract) and teaches a robotic arm assembly is operationally coupled to move horizontally and vertically substantially along the entirety of the height and width of the rack assembly and is further adapted to remove housing assemblies at least partly from the rack (Para. [0022], "[a]nimal cage 10 may be transported along top track 18 back and forth from animal cage racks 22 and one or more holding areas 50, e.g. 50a or 50b, using robotic arm 40"; see robotic arm 40 in figs. 2 & 5-6). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Salem such that there was a robotic arm adapted to remove housing assemblies at least partly from the rack as taught by Copeland, in order to automate the transportation of the housing assemblies.

Response to Arguments

Applicant's arguments filed on December 22, 2025 have been fully considered but they are not persuasive. Applicant argues (Remarks, p. 3) that "the Examiner alleges that Saliu teaches a 'plurality of models' with cross-checking via CNN classification and aggregation...Saliu does not, however, teach Applicant's claimed cross-checking configuration. Instead, Saliu describes CNN-based object detection...Saliu's classification scores are single model outputs (Saliu, [0099]), above, without cross-checking against interaction or exclusion models, as claimed by Applicant." Examiner notes, as set forth in the Final Rejection on 10/21/2025, only one of the examples in the Markush grouping is required in order to meet the claimed limitation. Furthermore, the examiner clearly did not map anything to the "exclusion model", so it is unclear why Applicant is focused on Saliu teaching cross-checking against an exclusion model. However, Saliu does teach an interaction model as per Applicant's description of an "interaction model" in Para. [0061]. As shown in figs. 4c-4e of Saliu, after the object is detected, the model compares the objects to those of neighbor images to determine whether they are part of the same object (para. [0096]-[0098]).

Applicant argues (Remarks, p. 5) that "Saliu targets high-throughput microscopy for small organism such as Drosophila in unconstrained environments, which, because of the small size of the animals, would be in a comparatively flat environment, such as a petri dish or multi-well plate. A person of ordinary skill in the art would not combine these disparate systems absent hindsight." Examiner notes, in response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971). Furthermore, Saliu discloses in at least Para. [0061] that "there are a multitude of applications which the tracking technology presented here enables or can advance in the future...Example model organisms that can be tracked include...small rodents such as mice and rats."

Furthermore, in response to applicant's argument (remarks, p. 4) that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., that there are several different detectors directed at the same rodent crowd, each configured to look at different individual traits: one only looks at hair color and patterns, another only watches styles of walks and moves, a third notices who is pushing whom or walking together, a fourth makes sure no two individuals can magically occupy the same spot, and a fifth specializes in figuring out identities when an individual is hidden behind other individuals) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Furthermore, in response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., "independent information channels", p. 8) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Examiner notes, as set forth on p. 6 in the Final Rejection, Saliu discloses "a plurality of models (as discussed in Para. [0182-184], the projection process cross references a plurality of models, or in the alternative, at least the YOLO series discloses a plurality of models)". Applicant keeps relying on the convolutional neural network (CNN) of Saliu as teaching different models; however, that is not what the Examiner relied upon.

Conclusion

All claims are identical to or patentably indistinct from, or have unity of invention with claims in the application prior to the entry of the submission under 37 CFR 1.114 (that is, restriction (including a lack of unity of invention) would not be proper) and all claims could have been finally rejected on the grounds and art of record in the next Office action if they had been entered in the application prior to entry under 37 CFR 1.114. Accordingly, THIS ACTION IS MADE FINAL even though it is a first action after the filing of a request for continued examination and the submission under 37 CFR 1.114. See MPEP § 706.07(b). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NEVENA ALEKSIC whose telephone number is (571)272-1659. The examiner can normally be reached Monday-Thursday 8:30am-5:30pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kimberly Berona, can be reached at (571)272-6909. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/N.A./ Examiner, Art Unit 3647
/Christopher D Hutchens/ Primary Examiner, Art Unit 3647

Footnotes:
1. Examiner notes: only one sensor is required to meet the claimed limitation.
2. Examiner note: only one of the examples in the Markush grouping is required.
3. Examiner notes: only one of the examples in the Markush grouping is required.
4. Examiner notes: only one of the examples in the Markush grouping is required.
5. Examiner notes: only one of the examples in the Markush grouping is required.
6. Examiner notes: only one sensor is required to meet the claimed limitation.
7. Examiner note: only one of the examples in the Markush grouping is required.
8. Examiner notes: only one of the examples in the Markush grouping is required.
9. Examiner notes: only one of the examples in the Markush grouping is required.
10. Examiner notes: only one of the examples in the Markush grouping is required.
11. Examiner notes: only one of the examples in the Markush grouping is required.
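The limitation at the center of this §103 dispute is tracking software that runs several models (appearance, motion, interaction, exclusion, occlusion handling), cross-checks at least two of their outputs to validate each animal's identity, and syncs time-stamped location data with other sensor data. The sketch below illustrates that kind of cross-check in the abstract only; it is not taken from the application, Salem, or Saliu, and all names and thresholds are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Track:
    animal_id: str      # identity assigned by one tracking model
    x: float            # cage-floor coordinates
    y: float
    timestamp: float    # seconds, read from the shared clock

def cross_check(appearance_tracks, motion_tracks, max_dist=2.0):
    """Validate identities by comparing two models' outputs for the same frame.

    An identity is accepted when both models place the same animal_id within
    max_dist of each other; disagreements are flagged for review instead.
    """
    validated, flagged = [], []
    motion_by_id = {t.animal_id: t for t in motion_tracks}
    for a in appearance_tracks:
        m = motion_by_id.get(a.animal_id)
        if m and ((a.x - m.x) ** 2 + (a.y - m.y) ** 2) ** 0.5 <= max_dist:
            validated.append(a)
        else:
            flagged.append(a)
    return validated, flagged

def attribute_sensor_event(event_time, event_location, tracks, max_dt=0.1):
    """Attribute a time-stamped sensor event (e.g., a water-valve trigger)
    to whichever validated track is nearest in time and position."""
    candidates = [t for t in tracks if abs(t.timestamp - event_time) <= max_dt]
    if not candidates:
        return None
    return min(candidates,
               key=lambda t: (t.x - event_location[0]) ** 2 + (t.y - event_location[1]) ** 2)
```

Here the "appearance" and "motion" tracks stand in for any two of the claimed model types; the point of the cross-check is that an identity is only accepted when independent models agree, and a sensor event is only attributed to an animal whose validated track matches it in both time and position.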

Prosecution Timeline

May 16, 2023: Application Filed
Apr 11, 2025: Non-Final Rejection — §103
Jul 02, 2025: Interview Requested
Jul 10, 2025: Examiner Interview Summary
Jul 10, 2025: Applicant Interview (Telephonic)
Jul 14, 2025: Response Filed
Oct 10, 2025: Final Rejection — §103
Dec 11, 2025: Interview Requested
Dec 19, 2025: Applicant Interview (Telephonic)
Dec 19, 2025: Examiner Interview Summary
Dec 22, 2025: Response after Non-Final Action
Feb 19, 2026: Request for Continued Examination
Mar 03, 2026: Response after Non-Final Action
Mar 18, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12595082: ThermaSat Solar Thermal Propulsion System (granted Apr 07, 2026; 2y 5m to grant)
Patent 12589891: CARRIER ROCKET SYSTEM WITH CARRIER ROCKET AND LAUNCH ASSISTANCE UNIT (granted Mar 31, 2026; 2y 5m to grant)
Patent 12583583: HIGH-ALTITUDE PSEUDO SATELLITE CONTROL (granted Mar 24, 2026; 2y 5m to grant)
Patent 12582087: BACKPACK FOR CARRYING ANIMALS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12570397: LIFT ENHANCEMENT ASSEMBLY OF AN AERIAL VEHICLE WITH FIXED WINGS (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 74%
With Interview: 83% (+9.0%)
Median Time to Grant: 2y 5m
PTA Risk: High
Based on 105 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month