Prosecution Insights
Last updated: April 19, 2026
Application No. 16/806,962

SYSTEMS AND METHODS FOR IMAGING OF AN ANATOMICAL STRUCTURE

Status: Non-Final OA (§103)
Filed: Mar 02, 2020
Examiner: BUI PHO, PASCAL M
Art Unit: 3798
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: In Vivo Analytics Inc.
OA Round: 7 (Non-Final)
Grant Probability: 65% (Moderate)
Expected OA Rounds: 7-8
Time to Grant: 3y 3m
Grant Probability with Interview: 46%

Examiner Intelligence

Career Allow Rate: 65% (grants 65% of resolved cases; 271 granted / 418 resolved; -5.2% vs TC avg)
Interview Lift: -19.1% (minimal; resolved cases with vs. without interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 482 across all art units (64 currently pending)

Statute-Specific Performance

§101: 2.2% (-37.8% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§102: 21.8% (-18.2% vs TC avg)
§112: 21.6% (-18.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 418 resolved cases

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/26/2025 has been entered.

Response to Amendment

Applicant's amendments and remarks, filed 11/26/2025, are acknowledged. Rejections and/or objections not reiterated from previous Office actions are hereby withdrawn. The following rejections and/or objections are either reiterated or newly applied; they constitute the complete set presently being applied to the instant application.

Status of Claims

Claims 21-40 are currently under examination.

Priority

Applicant's claim of priority as a continuation of application Ser. No. 15/621,983, filed 06/13/2017, now US Patent 10575934, published 03/03/2020, is acknowledged. Applicant's claim for the benefit of priority under 35 U.S.C. 119(e) to provisional applications 62/382,679, filed 09/01/2016; 62/382,654, filed 09/01/2016; 62/350,128, filed 06/14/2016; and 62/350,129, filed 06/14/2016, is acknowledged.

Withdrawn Objections/Rejections

The rejection of claims 22, 23, 36, and 37 under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA) is withdrawn in view of Applicant's arguments and/or amendments.

Response to Arguments

Applicant's responses and arguments filed 11/26/2025 regarding the claim rejections under 35 USC 103 have been fully considered but are not persuasive, for the following reasons.
Regarding the claim rejections under 35 USC 103, Applicant argues that reliance on the evidential references is improper. In response, the examiner has relied upon the teachings of Wang’2012a and Wang’2012b because Wang’2013 incorporated those teachings in his disclosure at [0040] and [0041], rendering their teachings obvious; they are cited to support the clarity of the rejection by providing better visualization of the images related to the claimed technology. The examiner therefore finds the argument not persuasive.

Applicant further argues that Kriston fails to teach a data store including a plurality of position definitions and that there is no motivation to combine Wang’2013 with Kriston. In response, the examiner has relied upon Kriston to teach a storage system for storing protocols and instructions ([0025] “a storage system 126”). This storage stores data and protocols for performing the claimed tasks, including “information” related to protocols with multimodality images as references, the information including the location and contour of organs for registration ([0030]) and their spatial placement with location and orientation ([0038]), therefore including a plurality of position definitions across multimodalities; the storage system thus includes a plurality of position definitions. Additionally, Wang’2013 teaches a listing of a plurality of position definitions with different fixed angles for imaging ([0085] and Table 2) for the multimodal imaging protocols, providing the motivation to use the storage system of Kriston to perform appropriate registration between medical images and reference images while providing additional information, which is the motivation to combine Kriston and Wang’2013 as presented within the Office Action. The examiner therefore finds the argument not persuasive.

Applicant further argues that the position of the mouse body is fixed in Wang’2013, which therefore does not teach the limitation.
In response, the examiner has considered the position of the mouse body to be the position of the mouse body relative to the imaging device, since, under a broad interpretation, one of ordinary skill in the art would recognize that image analysis for comparing and registering medical images is performed with reference to the body orientation within the medical images, and therefore to the orientation/position relative to the imaging device, in order to be able to perform comparative and registration analysis. Applying the broadest reasonable interpretation, the examiner therefore considers the teachings of Wang’2013 to disclose the plurality of position definitions, and finds the argument not persuasive.

Applicant also argues that Nisnevich is not analogous art. In response, the examiner has shown that Wang’2013 teaches a system and method for positioning the mouse/specimen relative to the scanning imaging device (Wang’2012a p.412 last ¶ to p.413 1st ¶) while lacking consideration of the use of a position detector.
The examiner introduced Nisnevich as teaching the same consideration of orienting the specimen using mechanical parts that control the rotation/position of the specimen relative to the scanning imaging device (for clarification, col.45 3rd-4th ¶ to col.48 2nd ¶), wherein the concern for the relative orientation of the specimen with respect to the scanning imaging device is solved with the use of a rotation encoder with an optical sensor as taught by Nisnevich (Fig.40C-D). Nisnevich is therefore considered analogous art under the second prong: (2) the reference is reasonably pertinent to the problem faced by the inventor, even if it is not in the same field of endeavor as the claimed invention, here presenting a rotation encoder teaching the relative positioning of the specimen with respect to the scanning imaging device for assessing the position/orientation of the specimen when scanned, which was not specifically disclosed by Wang’2013 but was one of the concerns of the claimed invention. The examiner therefore finds the argument not persuasive.

Applicant also argues that Nisnevich does not teach the “position detector” with the additional limitation “to calibrate, using a plurality of unique position identifiers…, analyze a machine readable identifier…”. In response, since Wang’2013 is concerned with the angular positioning of the specimen/mouse, the examiner has relied upon Nisnevich for teaching an angular positioning encoder, wherein such an encoder is known in the art to provide a code for the angle optically sensed by the encoder, the code using a unique grid (col.45 3rd-4th ¶ to col.48 2nd ¶) as unique position identifiers pre-calibrated prior to the experiment as presented by Walny, with this calibration being used to calibrate the angular position of the holder in which the specimen/mouse is placed prior to and during the experimental imaging.
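For orientation, the role of an encoder's unique position identifiers can be illustrated in the abstract. The following is a minimal sketch, assuming a Gray-coded absolute rotary encoder; the specific code grid of Nisnevich is not reproduced here, and the function names and resolution are illustrative, not drawn from the references:

```python
def gray_to_binary(gray: int) -> int:
    """Convert a Gray-coded encoder reading to its binary position index."""
    binary = gray
    shift = gray >> 1
    while shift:
        binary ^= shift  # each binary bit is the XOR of all higher Gray bits
        shift >>= 1
    return binary

def angle_from_code(gray_code: int, positions_per_rev: int = 1024) -> float:
    """Map a unique position identifier (encoder code) to degrees of rotation."""
    index = gray_to_binary(gray_code)
    return 360.0 * index / positions_per_rev
```

Because each code on the wheel is unique, reading a single code suffices to recover the holder's absolute angular position, which is the property the examiner relies on for identifying the first and second positions.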
Nisnevich specifically teaches analysis by the optical sensor using the angle encoder for positioning the scanned specimen (for clarification, col.45 3rd-4th ¶ to col.48 2nd ¶), using imaging for the analysis, therefore teaching a machine readable identifier assisting in identifying the angular position during scanning, and thus for a first and a second position as angularly positioned by Wang’2013, as discussed above. The examiner therefore finds the argument not persuasive.

Applicant argues that the references of record do not teach the claimed limitations "receive a processing protocol with a color intensity as a metric of interest for the applicable configuration…generate an average color intensity shown in at least the extracted portion of the first image data from the first location of the first image data based on the first position definition". In response, the examiner has shown that Wang’2013, with evidential references Wang’2012a and Wang’2012b, teaches the use of gray-scale images for performing segmentation of the body of the rodent, while Gunes teaches the conversion of color to grayscale as providing advantages for image analysis in terms of reducing computational cost while preserving capabilities for computer vision, classification, and segmentation, wherein, depending on the type of image analysis, averaging the color channels would provide such advantages.
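The channel-averaging conversion attributed to Gunes, and the "average color intensity" of the claim language, can be sketched minimally. This assumes a plain (R + G + B)/3 average over a flat list of pixels; it is an illustration only, not the implementation of any cited reference:

```python
def average_grayscale(rgb_pixels):
    """Unweighted channel average: gray = (R + G + B) / 3 per pixel."""
    return [(r + g + b) / 3.0 for (r, g, b) in rgb_pixels]

def mean_intensity(gray_values):
    """Average intensity over an extracted image region."""
    return sum(gray_values) / len(gray_values)
```

Applied to the extracted portion of the first image data, `mean_intensity(average_grayscale(region))` would yield the kind of averaged color intensity the claim recites.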
Further, Gunes teaches the determination of a weighted-color-defined grayscale using a Mahalanobis distance (p.855 last ¶) as a color metric for characterizing features within medical images. Since Wang’2013, with evidential references Wang’2012a and Wang’2012b, teaches the use of gray-scale images for performing segmentation and identification of features of the body of the rodent ([0039]), combining Wang’2013 (as evidenced by Wang’2012a and Wang’2012b) with Gunes, which suggests the applicability of averaged RGB color for defining a gray scale for imaging analysis applications such as segmentation (p.853 col.2 last ¶ to p.854 col.2 1st ¶), teaches generating an average color intensity shown in at least the extracted portion of the first image data from the first location of the first image data based on the first position definition.

Applicant amended dependent claims 22-23 and 36-37 with subject matter changing and clarifying the scope of the claims, and argues that the references of record do not teach these amended limitations. In response, the examiner is applying new grounds of rejection, since the amendments introduce subject matter not previously prosecuted and therefore necessitate new grounds of rejection. The examiner is considering new references to address these limitations and finds the arguments moot.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

For the purpose of clarity of the rejection, limitations or parts of limitations written in brackets, as exemplified as follows, are limitations or parts of limitations that are not taught by the references: [...limitations not taught by reference...].

Claims 21, 24-28, 33-35, 38, and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (USPN 20130034203 A1; Pub.Date 02/07/2013; Fil.Date 08/01/2012; hereafter Wang’2013), as evidenced by Wang et al. (2012 IEEE Trans. Med. Imaging 31:88-102; Pub.Date 2012; hereafter Wang’2012a) and Wang et al. (2012 Mol. Imaging Biol. 14:408-419; Pub.Date 2012; hereafter Wang’2012b), in view of Kriston et al. (USPN 20150317785 A1; Pub.Date 11/05/2015; Fil.Date 07/15/2015), in view of Nisnevich et al. (USPN 7588314 B1; Pat.Date 09/15/2009; Fil.Date 01/26/2006), in view of Walny et al. (FOR DE 102013200210 B3; Pub.Date 12/06/2014; Fil.Date 01/09/2013), and in view of Gunes et al.
(2016 SIViP 10:853–860; Pub.Date 10/2015).

Regarding independent claim 21, Wang’2013 teaches a system designed for reproducibility of diagnostic imaging ([0095] “For physical imaging systems with reliable reproducibility of bed placement, the scan of the bed only needs to be performed once and will be consistent for different subjects” and Figs. 23-24, [0209] “As an aid to establishing the capability to obtain stable and reproducibility images of subject animals, a bench-top PET system 100, referred to as the PETbox4, was designed for integrated biological and anatomical preclinical imaging of mouse models”), therefore a system capable of providing imaging results indicative of an in vivo experimental result ([0209]-[0211]), for aligning 2D/3D images from one or two different imaging sensors (Title, abstract; X-ray images and optical images) with the use of an atlas as reference positioning, with angles defined around the craniocaudal axis providing two images, one coronal (0° angle) and one sagittal (90° angle) (Fig. 2 and [0055] “This method requires both top-view and side-view 2D images as inputs” and Fig. 9 showing the different angles and modalities used in combination to perform the diagnostic imaging), therefore with the position of the animal being known and the location of the same organ also (Figs. 15-16), wherein the system is used on anesthetized animals ([0215]), therefore reading on providing reproducible imaging results indicative of an in vivo experimental result, the imaging results for presentation via a display unit.
Therefore, Wang’2013 teaches a system for providing reproducible imaging results indicative of an in vivo experimental result, the imaging results for presentation via a display unit, the system comprising: an image receiver configured to receive first image data and second image data obtained in conjunction with a positioning assembly, the positioning assembly configured to orient a subject in a plurality of positions, including at least a first position and a second position different from the first position (Figs. 7a and 7b showing the use of an X-ray detector or camera detector for receiving image data at different angles, with the animal placed in the chamber (Fig. 23 and [0211], element 116 movable to place the animal in the proper orientation as in Fig. 9)), wherein the first image data includes a first image and information associating the first image with the first position and the second image data includes a second image and information associating the second image with the second position, with Wang’2013 teaching the imaging being performed at different angles according to a table, each image therefore corresponding to an angle of acquisition, thereby indexing each image with the corresponding protocol and angle position ([0108] and Fig. 9); [...a data store including a plurality of position definitions...], wherein a first position definition identifies, for an animal subject imaged at a first time while in the first position, a first location of an anatomical feature within an image of the subject animal in the first position at the first time, and wherein a second position definition identifies, for an animal subject imaged at a second time while in a second position, a second location of the anatomical feature within an image of the subject animal in the second position at the second time, since Wang’2013 teaches analysis of images of an animal subject with the use of a probabilistic atlas as reference positioning ([0201], probabilistic atlas being used for segmentation of abdominal organs, with Fig. 2 and [0055] “procedure of the 2D/3D atlas registration...This method requires both top-view and side-view 2D images as inputs” and Fig. 9 for imaging at different angles with different combinations of imaging modalities, therefore with the different orientations of the subject and the determination of the position of the organ using the probabilistic positioning of the considered organ via the alignment/registration of the animal image with the reference atlas images). The images are taken with angles defined around the craniocaudal axis providing two images, one coronal (Fig. 9, 0° angle) and one sagittal (Fig. 9, 90° angle). The coronal image is the first image at a first position definition for a mouse test animal imaged at a first time, and the sagittal image is the second image at a second position definition for the same mouse at a second time (Fig. 7A-B and [0081] “The principal axis can be rotated along the y axis by different view angles θ denotes the view angle” for a single imaging detector, therefore teaching at least the first and second images taken sequentially at different times). Wang’2013 teaches imaging the different locations of organs ([0025] and Fig.
12 “(d) shows the segmented organs divided into two groups comprising high-contrast organs and low-contrast organs; (e) shows the step of two statistical shape models being constructed for the high-contrast organs and low-contrast organs”), wherein initially the registration between the animal image and the database image is performed with the surface of the body to obtain a registered atlas surface ([0087]-[0088]), and then teaching, for the coronal image while in a first position, a first location for the anatomical feature at a first time and, for the sagittal image while in a second position, a second location for the anatomical feature at a second time, as checked with the determination of a probability map of the registered organs to be positioned (Fig. 14 and [0027]), wherein the probability maps are interpreted as position definitions, therefore also teaching: identify the first position definition for processing the first image data based on a determination that the first image data associates the first image with the first position; and identify the second position definition for processing the second image data based on a determination that the second image data associates the second image with the second position; and [...wherein the data store further comprises...] a processing protocol for each of a plurality of configurations selected from the group consisting of (Wang’2013 Fig. 9 providing a processing/registration protocol attached to each of the imaging configurations listed in the table): at least one or more of each of a configuration for at least one of a plurality of anatomical features (Wang’2013 listing the different organs being targeted, due to their differences in rigidity, to be registered within the corresponding instructions ([0096])), experiments (Wang’2013 creating an atlas for the considered animal and providing preclinical imaging of the animal for co-registration with the constructed atlas (abstract)), image data types (Wang’2013 using two types of imaging devices, optical and X-ray (Fig. 9)), and position assemblies (Wang’2013 with the different orientation angles of the animal versus the fixed orientation of the optical and X-ray cameras (Fig. 9)), wherein the at least one processing protocol includes instructions for generating an imaging result for a given configuration (Wang’2013 teaching the generation of a scoring ([0012] accuracy metrics for comparison, Fig. 11 and [0111]-[0112] for selected organs) for measuring the accuracy of registration of the positions of the subject organs against the same organs from the atlas), the instructions including at least one of thresholds for comparing images (Wang’2013 teaching the generation of a scoring ([0012] accuracy metrics for comparison, Fig. 11 and [0111]-[0112] for selected organs, such as the Dice coefficient [0108])), ranges for comparing images (Wang’2013 teaching the different ranges of the Dice coefficient for the different organs (Fig.
19)), and an equation with variables for representing a relationship between images (Wang’2013 teaching the equation for the Dice coefficient for the registration accuracy ([0108], equation 11, including variables representing the relationship between the two images)); [...a position detector configured to: calibrate, using a plurality of unique position identifiers, prior to an imaging session and analyze a machine readable identifier associated with the positioning assembly to assist with identifying the first and second position; identify the first position definition for processing the first image data based on a determination that the first image data associates the first image with the first position; and identify the second position definition for processing the second image data, based on a determination that the second image data associates the second image with the second position; and...] an image processor ([0113] a “PC with a 3.05 GHz CPU and 5.99 GB RAM” running a program using “IDL 7.1 (ITT Visual Information Solutions, Boulder, Colo., USA)” providing online access to software) configured to: determine an applicable configuration from the plurality of configurations (Wang’2013 teaching the determination of the applicable configuration according to the protocol chosen from the table of Fig. 9 as applied to the selected organs for the animal); receive a processing protocol [...with a color intensity as a metric of interest...] for the applicable configuration, the applicable configuration identifying a comparison for image data and an associated result based thereon ([0113] “The registration workflow was programmed with IDL 7.1...The Elastix™ toolbox was accessed online by the IDL program”, teaching the registration protocol as a processing protocol being received online, wherein the registration process implicitly compares the first and second image data to the atlas for registering the identified organs and their locations to provide the results of the identified registered organs ([0010] “prerequisite for this work is the existence of a digital mouse atlas. This atlas, in a preferred embodiment, is registered to a top-view X-ray projection, a side-view optical camera photo and/or a laser surface scan of the subject animal and helps to approximate the subject organ regions.”), leading to the determination of the level of organ region identification ([0077] “to provide definition of organ regions and to facilitate the evaluation of registration accuracy”, with the determination/validation of the registration accuracy via the use of the Dice function ([0107]-[0108])).
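The Dice coefficient cited from Wang’2013 ([0108]) is the standard overlap metric, Dice = 2|A∩B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical regions). A minimal sketch over binary masks, for illustration only (not the reference's equation 11 as printed):

```python
def dice_coefficient(mask_a, mask_b) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = {i for i, v in enumerate(mask_a) if v}
    b = {i for i, v in enumerate(mask_b) if v}
    if not a and not b:
        return 1.0  # two empty masks are treated as a perfect match
    return 2.0 * len(a & b) / (len(a) + len(b))
```

A registered organ mask compared against the manually segmented mask of the same organ yields the per-organ accuracy scores reported in Figs. 10A-10D.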
Wang’2013 further teaches that the processing protocol includes, for each of the first and second images, one or more locations of input image data to compare, and that the applicable configuration further provides instructions for how to compare the indicated input image data and how to present comparison data. More specifically, Wang’2013 teaches the registration as a comparison protocol applied to specific organs at one or more locations of the acquired X-ray or optical image (Fig. 13b imaging different organs) and the atlas images as input image data, with registration of the different organs of interest (Fig. 13B), therefore indicating one or more locations of the input image data to register/compare, as the acquired first image of the animal subject in the first position at a first time and the acquired second image of the animal subject in the second position at a second time as discussed above, with the comparison being performed by the registration protocol against the animal subject atlas for the different organs and with assessment of the different levels of accuracy for the different organs or locations of the input image data for how to compare the indicated input image data ([0087] and Fig. 8 for the registration of the coronal and sagittal images, and [0108] “For each registration result, the registration accuracy of each organ was measured using the Dice coefficient” for how to compare the indicated input image data, with the processor performing the comparison for accuracy of registration ([0178]-[0181])).
Wang’2013 further teaches that the associated result indicates an output imaging result to provide for a comparison result. More specifically, Wang’2013 teaches the associated result as registration accuracy indicating an output imaging result defined by the images, including each of the targeted organ probability maps overlaid on the anatomical images after registration ([0177] and Fig. 15 and Fig. 16, “the results of organ probability maps overlaid on non-contrast micro-CT images. Coronal and sagittal slices of different subjects are presented. The probability maps are shown in gray scale, with the brighter portion representing the probability value (brighter portions represent higher probability)”, and [0111] and Figs. 10A-10D providing the Dice index for the registration accuracy), for providing visual assessment of the registration accuracy together with a quantitative assessment of the registration accuracy via the Dice determination as an index providing a comparison result between the different organs, with evidential reference Wang’2012a, Figs. 4 and 5, describing the same results in color, each color being directed to a target organ and the brightness of each color representing the probability of the registered organ; extract a portion of the first image data from the first location of the first image data based on the first position definition; extract a portion of the second image data from the second location of the second image data based on the second position definition, wherein, as discussed above, the first image data is the coronal image and the second image data is the sagittal image, with the extraction performed by segmentation ([0177] and Fig. 16, “FIG. 16 compares the mean shapes of registration results...`S` stands for manual segmentation result, and `R` stands for registration result”, for the whole body and for each of the targeted organs at the first and second locations, where Wang’2012a provides a better image of the segmented whole body and organs (see Fig. 4 and Fig. 5)). The comparison between the animal images and the probability maps from the database images necessarily uses the first and the second image data, and the accuracy of the comparison is provided by the accuracy analysis ([0179]-[0181] as applied to test images); generate comparison data according to the applicable configuration identified in the processing protocol using extracted portions of the first image data and the second image data at the one or more locations identified by the processing protocol, wherein Wang’2013 teaches the Dice index as the comparison data being generated ([0115] “Based on FIGS. 10A-10D, it can be seen that different organs have different levels of accuracy”, [0117] and Figs. 10A-D reporting the accuracy of the registration of the first and second images), with the registration of the different organs as the comparison identified in the processing protocol and the Dice index representing the registration accuracy of the different organs, representing the one or more locations identified by registration/comparison from the first portion of the locations to the first image data and second image data; [...generate an average color intensity shown in at least the extracted portion of the first image data from the first location of the first image data based on the first position definition...]; generate an imaging result according to the processing protocol using the comparison data, wherein Wang’2013 teaches the probability mapping of the coronal and sagittal images using the registration accuracy ([0115] “Based on FIGS. 10A-10D, it can be seen that different organs have different levels of accuracy”, [0117] and Figs. 10A-D reporting the accuracy of the registration of the first and second images); and cause presentation of the imaging result via the display unit ([0177] with Fig. 15 and Fig. 16, with evidential reference Wang’2012a, Figs. 4 and 5, describing the same results in color, each color directed to a target organ and the brightness of each color representing the probability of the registered organ, wherein “Registration result of organ probability maps, overlaid with non-contrast micro-CT images of different subjects” for Fig. 4 and “Visual comparison of the registration results with human segmentation results, based on contrast-enhanced micro-CT images” for Fig. 5 teach the display of the registration results on a display for visualization).

Wang’2013 with evidential references Wang’2012a and Wang’2012b does not specifically teach a data store including a plurality of position definitions and a processing protocol for each of a plurality of configurations; a position detector configured to: calibrate, using a plurality of unique position identifiers, prior to an imaging session and analyze a machine readable identifier associated with the positioning assembly to assist with identifying the first and second position, identify the first position definition for processing the first image data based on a determination that the first image data associates the first image with the first position, and identify the second position definition for processing the second image data, based on a determination that the second image data associates the second image with the second position; a processing protocol with a color intensity as a metric of interest; and generate an average color intensity shown in at least the extracted portion of the first image data from the first location of the first image data based on the first position definition, as in claim 21.
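For orientation, the claimed "data store including a plurality of position definitions" can be pictured as a mapping from a tagged subject position to the expected image-space location of an anatomical feature, with each acquired image carrying a position tag used to look up the matching definition. A minimal sketch; all keys, feature names, and coordinates below are hypothetical, not taken from the claims or references:

```python
# Hypothetical position definitions: each maps a subject position
# (e.g., a view angle) to the expected image location of a feature.
POSITION_DEFINITIONS = {
    "coronal_0deg":   {"feature": "liver", "location": (120, 240)},
    "sagittal_90deg": {"feature": "liver", "location": (90, 260)},
}

def definition_for(image_metadata: dict) -> dict:
    """Select the position definition matching the position tag
    associated with an acquired image."""
    return POSITION_DEFINITIONS[image_metadata["position"]]
```

Under this picture, the first and second image data each carry a position tag, and the detector's job reduces to selecting the correct entry for processing each image.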
However, Kriston teaches, within the same field of endeavor of imaging using multiple modalities (Title, abstract), the common practice in image processing of having a storage system for storing protocols and instructions ([0025] “a storage system 126 that communicates with at least some of the modules 121-125. The modules include a feature-image generator 121 that is configured to analyze a medical image to generate an image (e.g., a feature image) that is based on the medical image and includes a designated anatomical feature”) as well as images, features, and image information ([0029] “The atlas generator 125 may receive and store designated medical images, including feature images, in the storage system 126 or other storage system. The atlas generator 125 may designate the medical image(s) as being part of an anatomical atlas. The atlas generator 125 may assign identifying labels/orientation/positions or other information to the medical images. For example, the information may be in accordance with established protocols”), therefore storing protocols and instructions ([0025] “a storage system 126”). This storage stores data and protocols for performing the claimed tasks, including “information” related to protocols with multimodality images as references, the information including the location and contour of organs for registration ([0030]) and their spatial placement with location and orientation ([0038]), therefore including a plurality of position definitions across multimodalities; the storage system thus includes a plurality of position definitions.
Additionally, Wang’2013 teaches the listing of a plurality of position definitions with different fixed angles for imaging ([0085] and Table 2) for the multimodal imaging protocols, providing the motivation to use the storage system of Kriston for performing appropriate registration between medical images and reference images while providing additional information, and therefore motivation to combine Kriston and Wang’2013. The combination of Wang’2013 and Kriston teaches the use of a data store including a plurality of position definitions and a processing protocol for each of a plurality of configurations as claimed. Therefore it would have been obvious for a person of ordinary skill in the art before the time of filing of the invention to have adapted the system of Wang’2013 with evidential references Wang’2012a and Wang’2012b such that the system comprises a data store including a plurality of position definitions and a processing protocol for each of a plurality of configurations, since one of ordinary skill in the art would recognize that using a system storage for storing images, all information regarding the images, and the associated protocols was common practice in the art, as taught by Kriston, and since positioning the test animal at different angles is known to be part of the established protocol as taught by Wang’2013. One of ordinary skill in the art would have expected that this modification could have been made with predictable results since both Wang’2013 and Kriston teach the development of an imaging atlas as related to multi-modalities. The motivation would have been to provide all the necessary information for performing the image processing, such as image registration, or for providing additional information as necessary, as suggested by Kriston ([0029]-[0031]).
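For illustration only, the claimed "data store including a plurality of position definitions and a processing protocol for each of a plurality of configurations" can be sketched as a simple keyed store. All names, angles, and protocol fields below are hypothetical placeholders for discussion, not taken from any cited reference.

```python
# Hypothetical sketch: a data store mapping imaging configurations to
# position definitions and processing protocols (illustrative values only).
POSITION_STORE = {
    "coronal":  {"view_angle_deg": 0,  "protocol": {"metric": "color_intensity", "reduce": "mean"}},
    "sagittal": {"view_angle_deg": 90, "protocol": {"metric": "color_intensity", "reduce": "mean"}},
}

def lookup_position(config_name):
    """Return the stored position definition and protocol for a named configuration."""
    entry = POSITION_STORE.get(config_name)
    if entry is None:
        raise KeyError(f"no position definition stored for {config_name!r}")
    return entry

print(lookup_position("coronal")["view_angle_deg"])  # 0
```

The point of the sketch is only that a position definition and its processing protocol are retrievable per configuration, as the claim language requires.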
Wang’2013 with evidential references Wang’2012a and Wang’2012b and modified with Kriston does not specifically teach a position detector configured to: calibrate, using a plurality of unique position identifiers, prior to an imaging session and analyze a machine readable identifier associated with the positioning assembly to assist with identifying with the first and second position, identify the first position definition for processing the first image data based on a determination that the first image data associates the first image with the first position; and identify the second position definition for processing the second image data, based on a determination that the second image data associates the second image with the second position, a processing protocol with a color intensity as a metric of interest and generate an average color intensity shown in at least the extracted portion of the first image data from the first location of the first image data based on the first position definition as in claim 21. However, Wang’2013 with evidential references Wang’2012a and Wang’2012b teaches that the rotational positioning of the specimen, which is immobile within the holder, is known from the hardware setup (Wang’2012a p.412 last ¶ to p.413 1st ¶), therefore teaching a system and method for positioning the mouse/specimen relative to the scanning imaging device while lacking any consideration of the use of a position detector.
However, Nisnevich addresses the same consideration of orienting a tool/specimen using mechanical parts for controlling the rotation/position of the specimen relative to the scanning imaging device (col.45 3rd-4th ¶ to col.48 2nd ¶), wherein the concern for the relative orientation of the specimen with respect to the scanning imaging device is solved with the use of a rotation encoder with an optical sensor (Fig.40C-D). Nisnevich therefore teaches, within the same field of endeavor of controlling the mechanical rotation of elements for imaging (Title, abstract and Figs. 40A-D), the common use of a rotational position encoder, including a shaft encoder sensor as position detector, for detecting the angular position of a rotating axis (col.45 3rd-4th ¶ and Fig.40C element 468), with a shaft encoder disc fixed on the rotating axis presenting a plurality of unique position markers (col.45 3rd-4th ¶ element 466) for measuring the rotational position of the rotating element. Additionally, Nisnevich teaches an angular positioning encoder, wherein such an encoder is known in the art to provide a code for the angle optically sensed by the encoder, the code using a unique grid (col.45 3rd-4th ¶ to col.48 2nd ¶) as unique position identifiers pre-calibrated prior to the experiment as presented by Walny, this calibration being used for calibrating the angular position of the rotating holder in which the specimen/mouse is placed prior to and during the experimental imaging.
Nisnevich specifically teaches the analysis of the optical sensor output using the angle encoder for positioning the scanned specimen (col.45 3rd-4th ¶ to col.48 2nd ¶), using imaging/optical sensing for the analysis, therefore teaching a machine readable identifier processed to assist in identifying the angular position during the scanning, and therefore a first and second position as angularly positioned by Wang’2013 as discussed above. Further, according to Walny, a rotational position measuring system would necessarily be calibrated in order to measure the rotational angles (p.19 10th-12th ¶ and p.20 2nd ¶). Therefore Nisnevich and Walny teach a position detector configured to: calibrate, using a plurality of unique position identifiers, prior to an imaging session and analyze a machine readable identifier associated with the positioning assembly to assist with identifying with the first and second position, with the use of the rotational measuring system with the angular position encoder to identify the first position definition for processing the first image data based on a determination that the first image data associates the first image with the first position; and identify the second position definition for processing the second image data, based on a determination that the second image data associates the second image with the second position.
Therefore it would have been obvious for a person of ordinary skill in the art before the time of filing of the invention to have adapted the system of Wang’2013 with evidential references Wang’2012a and Wang’2012b and modified in view of Kriston such that the system comprises a position detector configured to: calibrate, using a plurality of unique position identifiers, prior to an imaging session and analyze a machine readable identifier associated with the positioning assembly to assist with identifying with the first and second position, identify the first position definition for processing the first image data based on a determination that the first image data associates the first image with the first position; and identify the second position definition for processing the second image data, based on a determination that the second image data associates the second image with the second position, since one of ordinary skill in the art would recognize that using a rotational positioning system with an encoder for positioning calibration and for measuring the angular position corresponding to acquired images was known in the art as taught by Nisnevich and Walny. One of ordinary skill in the art would have expected that this modification could have been made with predictable results since Wang’2013, Nisnevich and Walny all teach the use of a calibrated angular position measuring system for tracking the location and position of an object or subject. The motivation would have been to provide an accurate determination of the rotation angle of the test animal to provide an optimal position or registration between images, as suggested by Nisnevich and Walny (p.19 last ¶).
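As technical background on the shaft encoder discussed above: absolute rotary encoders of this kind typically emit a reflected-binary (Gray) code word that is decoded to an angular index. A minimal sketch follows; the 8-bit disc resolution is an assumption for illustration, not a value from Nisnevich or Walny.

```python
def gray_to_binary(gray: int) -> int:
    """Convert a reflected-binary (Gray) code word to its plain binary index."""
    binary = gray
    while gray:
        gray >>= 1
        binary ^= gray
    return binary

def gray_to_angle(gray: int, bits: int = 8) -> float:
    """Map a Gray-coded encoder reading to degrees (hypothetical 8-bit encoder disc)."""
    index = gray_to_binary(gray)
    return 360.0 * index / (1 << bits)

# Gray code 0b1100 decodes to binary index 8, i.e. 8/256 of a turn.
print(gray_to_angle(0b1100))  # 11.25
```

Gray coding is used on encoder discs because adjacent positions differ in exactly one bit, so a misread at a sector boundary is off by at most one step.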
Wang’2013 with evidential references Wang’2012a and Wang’2012b and modified with Kriston, Nisnevich and Walny does not specifically teach a processing protocol with a color intensity as a metric of interest and generate an average color intensity shown in at least the extracted portion of the first image data from the first location of the first image data based on the first position definition as in claim 21. However, while Wang’2013 with evidential references Wang’2012a and Wang’2012b teaches the use of gray-scale images for performing the segmentation of the body of the rodent as discussed above, Gunes teaches, within the same field of endeavor of optical imaging (Title and abstract), the common practice of converting color images into gray-scale images (abstract), knowing that color images provide information for computer vision, classification and segmentation (p.853 col.2 1st ¶), and that color-to-grayscale conversion provides a reduction in data while preserving the capability to perform computer vision, classification and segmentation, using an averaging method between the red, green and blue channels to generate a grayscale intensity as an average color intensity (p.853 col.2 last ¶ to p.854 col.2 1st ¶).
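For illustration, the channel-averaging conversion attributed to Gunes above (a grayscale value formed as the mean of the red, green and blue channels, and an average intensity over a region) can be sketched as follows; the pixel values are made up for the example.

```python
def rgb_to_gray_average(pixel):
    """Average the R, G, B channels of one pixel into a single grayscale intensity."""
    r, g, b = pixel
    return (r + g + b) / 3.0

def mean_gray_intensity(pixels):
    """Average grayscale intensity over a region of interest (iterable of RGB pixels)."""
    grays = [rgb_to_gray_average(p) for p in pixels]
    return sum(grays) / len(grays)

print(rgb_to_gray_average((90, 120, 150)))  # 120.0
```

This is the unweighted-average variant; luma-weighted conversions (unequal R/G/B weights) are a common alternative but are not what the averaging passage cited above describes.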
Gunes further teaches the determination of weighted color-defined grayscales defining a Mahalanobis distance (p.855 last ¶) as a color metric for characterizing features within the medical images. Since Wang’2013 with evidential references Wang’2012a and Wang’2012b teaches the use of gray-scale images for performing the segmentation and identification of features of the body of the rodent ([0039]), combining Wang’2013 with evidential references Wang’2012a and Wang’2012b with Gunes, which suggests the applicability of averaged RGB color for defining a gray scale for imaging analysis applications such as segmentation (p.853 col.2 last ¶ to p.854 col.2 1st ¶), teaches a processing protocol with a color intensity as a metric of interest and generate an average color intensity shown in at least the extracted portion of the first image data from the first location of the first image data based on the first position definition as claimed. Therefore it would have been obvious for a person of ordinary skill in the art before the time of filing of the invention to have adapted the system of Wang’2013 with evidential references Wang’2012a and Wang’2012b and modified in view of Kriston, Nisnevich and Walny such that the system comprises a processing protocol with a color intensity as a metric of interest and generate an average color intensity shown in at least the extracted portion of the first image data from the first location of the first image data based on the first position definition, since one of ordinary skill in the art would recognize that using color images and performing color-to-grayscale conversion for classification and segmentation was known in the art as taught by Gunes, and since using weighting/averaging techniques for defining an average color intensity to reduce the analysis time and volume with an optimization process for performing these computer vision analyses such as classification and segmentation was also known
in the art as taught by Gunes. One of ordinary skill in the art would have expected that this modification could have been made with predictable results since both Wang’2013 and Gunes teach the use of optical imaging for image analysis such as segmentation. The motivation would have been to provide a simpler and optimal approach for segmenting objects such as the subject contour when starting with more information, such as color imaging, and a direct conversion to a specific optimal grayscale, as suggested by Gunes (p.853 col.2 last ¶ to p.854 col.2 1st ¶). Regarding the dependent claims 24-28, all the elements of these claims are instantly disclosed or fully envisioned by the combination of Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny and Gunes. Regarding claim 24, Wang’2013 teaches wherein the animal subject imaged at the first time is an animal test subject, and wherein the animal subject imaged at the second time is the animal test subject (Fig. 7A-B imaging using a common laboratory mouse subject as in [0003] “The laboratory mouse is widely used as animal model in pre-clinical cancer research and drug development. Acquiring actual anatomy of a laboratory animal, such as a mouse, is frequently needed for localizing and quantifying functional changes. Currently in vivo imaging of mouse anatomy is achieved with PET, SPECT, and optical imaging modalities or tomographic imaging systems such as micro-CT and micro-MR as imaged with modalities. Also, anatomical imaging is used to measure organ morphometry, quantify phenotypical changes and build anatomical models”).
Regarding claim 25, Wang’2013 teaches the comparison as registration of the first and second image data with the atlas image data, determining the mapping probability for registration of the different targeted organs and determining the probability maps on the image data to visualize the organs, using segmentation to separate high contrast organs and low contrast organs ([0177] and Figs.15 and 16, with Wang’2012a showing the same results in color in Figs.4 and 5 for differentiating the pixels corresponding to the different organs), with, as discussed above, the determination of the accuracy of the calculations by performing a comparative analysis between the image data and the probability mapping. Wang’2013 therefore teaches that the first location identifies a first one or more pixel locations for the anatomical feature shown in the first image data of the subject in the first position, the second location identifies a second one or more pixel locations for the anatomical feature shown in the second image data of the subject in the second position, and wherein the image processor is configured to generate comparison data based on a comparison between values of pixels at the first one or more pixel locations and at the second one or more pixel locations.

Regarding claim 26, Wang’2013 teaches generating the imaging result (Fig.14 and [0027] “probability maps of the registered organs are overlaid”, with the overlaid probability maps for the registered organs as the imaging result) by comparing, via registration with the registration accuracy determined via the Dice index ([0108] “Dice coefficient”), first pixel values at the one or more pixel locations of the anatomical feature shown in the first image data with second pixel values at the one or more pixel locations of the anatomical feature shown in the second image data ([0026] Fig. 13A-C presenting the first 3 eigenvalues for the organ models to be registered with the first and second image data, and [0027] and Fig. 14 step (i) “probability maps of the registered organs are overlaid” and (j), with Wang’2012a describing in the same Fig. 3 the step (j) as “volume rendering the organ probability maps”, and [0177] with Fig.15 and Fig.16 with evidential reference Wang’2012a showing Figs. 4 and 5 describing the same results in color with each color being directed to each target organ and the brightness of each color representing the probability of the registered organ with the associated segmentation for analysis of each organ, wherein “Registration result of organ probability maps, overlaid with non-contrast micro-CT images of different subjects” for Fig. 4 and “Visual comparison of the registration results with human segmentation results, based on contrast-enhanced micro-CT images” for Fig. 5, with each image presenting processed and extracted pixels). Additionally, as discussed for claim 21, since Gunes teaches the generation of grayscale images defined with averaged color intensity, Gunes teaches determining a region of interest by calculating a plurality of average image color intensities using a pool of images with different color intensities.

Regarding claim 27, Wang’2013 teaches the similarity between the pixel and the voxel for image processing, accounting for corrections ([0082] “... the pixel p' is assigned the value I = I0·exp[−∫_{a1}^{a2} μ(s) ds] (6) where I is the pixel value, I0 is the source energy, µ(s) is the linear tissue attenuation coefficient along the emitted x-ray, and ∫_{a1}^{a2} ds denotes the line integration along the emitted x-ray. The coefficient µ(s) can be obtained from the CT image that was used for simulating the subject. Since the CT-images are contrast-enhanced, the voxel intensities will not exactly reflect the tissue attenuation coefficients in the non-contrast-enhanced mouse projection images that we anticipate. To eliminate the influence of contrast agent, the voxel intensities of contrast-enhanced organs (the liver and spleen which were already segmented) are scaled down to the level of brain intensity”), and since Wang’2013 teaches claim 25, which recites the same limitations with pixel instead of voxel as in claim 27, the combination of Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny and Gunes teaches claim 27. Regarding claim 28, Wang’2013 teaches claim 26, which recites the same limitations as claim 28 with pixel instead of the voxel recited in claim 28. Therefore the combination of Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny and Gunes teaches claim 28.

Regarding independent claim 33, claim 33 is directed to a general image processing system with claim limitations that are word for word the same as the claim limitations for the system of claim 21, the only difference being that the system of claim 33 receives the images from an imaging device whereas the system of claim 21 receives the images for analysis more generically. Since, as discussed for claim 21, Wang’2013 teaches the system as receiving the images from an imaging device (Figs. 7a and 7b showing the use of an X-ray detector or camera detector for receiving image data at different angles, with the animal placed in the chamber (Figs. 23-24 and [0209]-[0211], element 116 movable to place the animal in the proper orientation as in Fig. 9)), and since the combination of Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny and Gunes teaches all the limitations of claim 21, claim 33 is therefore made obvious, mutatis mutandis, by the teachings discussed above. Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny and Gunes teach claim 33. Regarding the dependent claims 34, 35, 38, all the elements of these claims are instantly disclosed or fully envisioned by the combination of Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny and Gunes.
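As background on the Dice index cited above ([0108] “Dice coefficient”) as a registration-accuracy measure: it scores the overlap of two segmentations as 2|A∩B| / (|A| + |B|). A minimal sketch on binary masks follows; the masks themselves are illustrative, not data from any cited reference.

```python
def dice_coefficient(mask_a, mask_b):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for two binary masks given as flat 0/1 sequences."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))  # overlapping foreground pixels
    size = sum(mask_a) + sum(mask_b)                      # total foreground in both masks
    return 2.0 * inter / size if size else 1.0            # identical empty masks: perfect score

a = [1, 1, 1, 0]
b = [1, 1, 0, 0]
print(dice_coefficient(a, b))  # 0.8
```

A Dice of 1.0 means the registered segmentation exactly matches the reference; 0.0 means no overlap.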
Regarding claim 34, claim 34 is a combination of part of claim 21 and claims 25-26, wherein Wang’2013 teaches that the image data comprises first image data and second image data (Figs. 7a and 7b showing the use of an X-ray detector or camera detector for receiving image data at different angles), with the coronal image (0° angle) and the sagittal image (90° angle) being the first and second image data for performing registration with the atlas images. Wang’2013 teaches the analysis of images of an animal subject with the use of an atlas as reference positioning (Fig. 2 and [0055] “procedure of the 2D/3D atlas registration. This method requires both top-view and side-view 2D images as inputs” and Fig. 9 for imaging at different angles with different combinations of imaging modalities). The images are taken at angles defined around the craniocaudal axis, providing two images, one coronal (Fig. 9, 0° angle) and one sagittal (Fig. 9, 90° angle). The coronal image is the first image at a first position definition for a mouse test animal imaged at a first time and the sagittal image is the second image at a second position definition for the same mouse at a second time (Fig. 7A-B and [0081] “The principle axis can be rotated along the y axis by different view angles θ denotes the view angle” for a single imaging detector, therefore teaching the plurality of position definitions comprising the one for the coronal image and the one for the sagittal image taken sequentially at different times). Wang’2013 teaches imaging the different locations of organs ([0025] and Fig. 12 “(d) shows the segmented organs divided into two groups comprising high-contrast organs and low-contrast organs; (e) shows the step of two statistical shape models being constructed for the high-contrast organs and low-contrast organs”), therefore teaching, for the coronal image while in a first position, a first location for the anatomical feature at a first time and, for the sagittal image while in a second position, a second location for the anatomical feature at a second time. The coronal image thus defines a first position definition and the sagittal image a second position definition, therefore teaching that the plurality of position definitions comprises: a first position definition that identifies, for a given subject imaged at a first time while in a first position, a first location for an anatomical feature at the first time, and a second position definition that identifies, for a given subject imaged at a second time while in a second position, a second location for the anatomical feature at the second time.
Additionally, Wang’2013 teaches the registration of the first and second image data with the atlas image data, determining the mapping probability for registration of the different targeted organs and determining the probability maps on the image data to visualize the organs, using segmentation to separate high contrast organs and low contrast organs ([0177] and Figs.15 and 16, with Wang’2012a showing the same results in color in Figs.4 and 5 for differentiating the pixels corresponding to the different organs), therefore teaching that the first location identifies one or more pixel locations for the anatomical feature shown in the first image data and the second location identifies one or more pixel locations for the anatomical feature shown in the second image data. The image processor is further configured to generate the imaging result (Fig.14 and [0027] “probability maps of the registered organs are overlaid”, with the overlaid probability maps for the registered organs as the imaging result) by comparing organs, via registration with the registration accuracy determined via the Dice index ([0108] “Dice coefficient”), first pixel values at the one or more pixel locations of the anatomical feature shown in the first image data with second pixel values at the one or more pixel locations of the anatomical feature shown in the second image data ([0026] Fig. 13A-C presenting the first 3 eigenvalues for the organ models to be registered with the first and second image data, and [0027] and Fig. 14 step (i) “probability maps of the registered organs are overlaid” and (j), with Wang’2012a describing in the same Fig. 3 the step (j) as “volume rendering the organ probability maps”, and [0177] with Fig.15 and Fig.16 with evidential reference Wang’2012a showing Figs. 4 and 5 describing the same results in color with each color being directed to each target organ and the brightness of each color representing the probability of the registered organ with the associated segmentation for analysis of each organ, wherein “Registration result of organ probability maps, overlaid with non-contrast micro-CT images of different subjects” for Fig. 4 and “Visual comparison of the registration results with human segmentation results, based on contrast-enhanced micro-CT images” for Fig. 5, with each image presenting processed and extracted pixels).

Regarding claim 35, claim 35 is a combination of part of claim 21 and claims 27-28, wherein Wang’2013 teaches that the image data comprises first image data and second image data (Figs. 7a and 7b showing the use of an X-ray detector or camera detector for receiving image data at different angles), with the coronal image (0° angle) and the sagittal image (90° angle) being the first and second image data for performing registration with the atlas images. Wang’2013 teaches the analysis of images of an animal subject with the use of an atlas as reference positioning (Fig. 2 and [0055] “procedure of the 2D/3D atlas registration. This method requires both top-view and side-view 2D images as inputs” and Fig. 9 for imaging at different angles with different combinations of imaging modalities). The images are taken at angles defined around the craniocaudal axis, providing two images, one coronal (Fig. 9, 0° angle) and one sagittal (Fig. 9, 90° angle). The coronal image is the first image at a first position definition for a mouse test animal imaged at a first time and the sagittal image is the second image at a second position definition for the same mouse at a second time (Fig. 7A-B and [0081] “The principle axis can be rotated along the y axis by different view angles θ denotes the view angle” for a single imaging detector, therefore teaching the plurality of position definitions comprising the one for the coronal image and the one for the sagittal image taken sequentially at different times). Wang’2013 teaches imaging the different locations of organs ([0025] and Fig. 12 “(d) shows the segmented organs divided into two groups comprising high-contrast organs and low-contrast organs; (e) shows the step of two statistical shape models being constructed for the high-contrast organs and low-contrast organs”), therefore teaching, for the coronal image while in a first position, a first location for the anatomical feature at a first time and, for the sagittal image while in a second position, a second location for the anatomical feature at a second time. The coronal image thus defines a first position definition and the sagittal image a second position definition, therefore teaching that the plurality of position definitions comprises: a first position definition that identifies, for a given subject imaged at a first time while in a first position, a first location for an anatomical feature at the first time, and a second position definition that identifies, for a given subject imaged at a second time while in a second position, a second location for the anatomical feature at the second time.
Additionally, Wang’2013 teaches the registration of the first and second image data with the atlas image data, determining the mapping probability for registration of the different targeted organs and determining the probability maps on the image data to visualize the organs, using segmentation to separate high contrast organs and low contrast organs ([0177] and Figs.15 and 16, with Wang’2012a showing the same results in color in Figs.4 and 5 for differentiating the pixels corresponding to the different organs), therefore teaching that the first location identifies one or more pixel locations for the anatomical feature shown in the first image data and the second location identifies one or more pixel locations for the anatomical feature shown in the second image data. Additionally, Wang’2013 teaches the similarity between the pixel and the voxel for image processing, accounting for corrections ([0082] “... the pixel p' is assigned the value I = I0·exp[−∫_{a1}^{a2} μ(s) ds] (6) where I is the pixel value, I0 is the source energy, µ(s) is the linear tissue attenuation coefficient along the emitted x-ray, and ∫_{a1}^{a2} ds denotes the line integration along the emitted x-ray. The coefficient µ(s) can be obtained from the CT image that was used for simulating the subject. Since the CT-images are contrast-enhanced, the voxel intensities will not exactly reflect the tissue attenuation coefficients in the non-contrast-enhanced mouse projection images that we anticipate. To eliminate the influence of contrast agent, the voxel intensities of contrast-enhanced organs (the liver and spleen which were already segmented) are scaled down to the level of brain intensity”), and since Wang’2013 teaches claim 34, which recites the same limitations with pixel instead of voxel as in claim 35, the combination of Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny and Gunes teaches claim 35. Regarding claim 38, Wang’2013 teaches the acquisition of micro-CT image data (Fig. 9), therefore teaching that the image data comprises non-optical image data.

Regarding independent claim 40, claim 40 is directed to a computer implemented method whose method steps are recited with the same wording as the functional limitations recited for the system of claim 33. Since, as discussed for claim 33, Wang’2013 teaches using a PC with a 3.05 GHz CPU and 5.99 GB RAM ([0072]) for executing the methods, therefore teaching a computer implemented method, and since the combination of Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny and Gunes teaches all the limitations of claim 33, claim 40 is therefore made obvious, mutatis mutandis, by the teachings discussed above. Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny and Gunes teach independent claim 40.

Claims 22, 23 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (USPN 20130034203 A1; Pub.Date 02/07/2013; Fil.Date 08/01/2012; hereafter Wang’2013), as evidenced by Wang et al. (2012 IEEE Trans. Med. Imaging 31:88-102; Pub.Date 2012; hereafter Wang’2012a) and Wang et al. (2012 Mol. Imaging Biol. 14:408-419; Pub.Date 2012; hereafter Wang’2012b), in view of Kriston et al. (USPN 20150317785 A1; Pub.Date 11/05/2015; Fil.Date 07/15/2015), in view of Nisnevich et al. (USPN 7588314 B1; Pat.Date 09/15/2009; Fil.Date 01/26/2006), in view of Walny et al. (FOR DE 102013200210 B3; Pub.Date 12/06/2014; Fil.Date 01/09/2013) and in view of Gunes et al. (2016 SIViP 10:853–860; Pub.Date 10/2015) as applied to claim 21, and further in view of Troy et al. (2004 Molecular Imaging 3:9-23; Pub.Date 2004). Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny and Gunes teach a system as set forth above. Wang’2013 teaches that the image receiver is further configured to receive the first image data from a first sensing device (Fig. 7A-B, with receiving the coronal image (0° angle) from the X-ray or optical camera), wherein the analysis is performed using processors. Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny and Gunes do not specifically teach an optical calibrator enabling the translation of a physical quantity to a biological quantity. However, Troy teaches, within the same field of endeavor of imaging animal models for medical applications (Title, abstract), the use of an optical system (Fig.2) for optically scanning the animal model for bioluminescence signals, with the instrument being calibrated in physical units of radiance and further calibrated to convert these units into biological units representing the number of cells presenting bioluminescence (abstract, p.12 col.1 2nd ¶, p.13-14 ¶ Instrument Calibration and Fig. 9 providing the physical signal as biological activity for bioluminescence/fluorescence calibration as applied to animal images as in Figs. 12-13). Since one of ordinary skill in the art would have recognized that separate programs could be executed within the same processor using an optical sensor/camera for capturing optical radiance as an image of the mouse, and since the position detector is likewise implemented with a processor processing the optical signal for the position of the animal support as discussed above, Troy therefore teaches an optical calibrator (as a processor) enabling the translation of a physical quantity (radiance as bioluminescence/fluorescence from the optical sensor) to a biological quantity (positively labeled cell count) as claimed.
Therefore it would have been obvious for a person of ordinary skill in the art, before the effective filing date of the claimed invention, to have adapted the system of Wang’2013 with evidential references Wang’2012a and Wang’2012b and modified in view of Kriston, Nisnevich and Walny such that the system further comprises an optical calibrator enabling the translation of a physical quantity to a biological quantity, since one of ordinary skill in the art would recognize that using an optical system for acquiring in-vivo bioluminescence/biofluorescence with a processor for converting the radiance into a calibrated bioluminescent/fluorescent cell count was known in the art, as taught by Troy. One of ordinary skill in the art would have expected that this modification could have been made with predictable results, since both Wang’2013 and Troy teach the use of optical imaging for image analysis of animal models. The motivation would have been to provide a calibrated system for assessing in-vivo monitoring of biological activity in animal models for scientific and medical discovery, as suggested by Troy (abstract). Regarding dependent claim 23, all the elements of this claim are instantly disclosed or fully envisioned by the combination of Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny and Troy. Regarding claim 23, Wang’2013 teaches the image receiver is further configured to receive the first image data from a second sensing device (Fig. 7A-B with receiving the coronal image (0° angle) from X-ray and/or optical camera), and Troy also teaches the use of an additional optical camera for bioluminescence/fluorescence imaging and analysis wherein the calibration of the instrument shows a proportionality factor between the bioluminescence/fluorescence magnitude and the cell count (Fig. 9), therefore teaching the optical calibrator generates a calibration factor for one or more experimental values as discussed above as claimed. 
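The calibration attributed to Troy above (a proportionality factor between measured radiance and labeled-cell count) amounts to a simple linear conversion. A minimal illustrative sketch follows; the function names and all numeric values are hypothetical, not from Troy or the claims:

```python
# Hypothetical linear optical calibration: convert measured radiance
# (physical quantity) into an estimated labeled-cell count (biological
# quantity) via a single proportionality factor.

def calibrate_factor(radiance_samples, cell_counts):
    """Least-squares proportionality factor k such that cells ~= k * radiance."""
    num = sum(r * c for r, c in zip(radiance_samples, cell_counts))
    den = sum(r * r for r in radiance_samples)
    return num / den

def radiance_to_cells(radiance, k):
    """Translate a radiance measurement into a cell-count estimate."""
    return k * radiance

# Hypothetical calibration data: radiance (photons/s/cm^2/sr) vs. cell count.
k = calibrate_factor([1e5, 2e5, 4e5], [50, 100, 200])
print(round(radiance_to_cells(3e5, k)))  # -> 150
```

This is only a sketch of the calibration-factor concept the rejection describes; an actual instrument calibration would also account for exposure settings and background signal.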
Claims 29-30 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (USPN 20130034203 A1; Pub.Date 02/07/2013; Fil.Date 08/01/2012; hereafter Wang’2013), as evidenced by Wang et al. (2012 IEEE Trans. Med. Imaging 31:88-102; Pub.Date 2012; hereafter Wang’2012a) and Wang et al. (2012 Mol. Imaging Biol. 14:408-419; Pub.Date 2012; hereafter Wang’2012b), in view of Kriston et al. (USPN 20150317785 A1; Pub.Date 11/05/2015; Fil.Date 07/15/2015), in view of Nisnevich et al. (USPN 7588314 B1; Pat.Date 09/15/2009; Fil.Date 01/26/2006), in view of Walny et al. (FOR DE 102013200210 B3; Pub.Date 12/06/2014; Fil.Date 01/09/2013) and in view of Gunes et al. (2016 SIViP 10:853–860; Pub.Date 10/2015) as applied to claim 21, and further in view of Masaki (FOR KR 20160003181 A; Pub.Date 01/08/2016; Fil.Date 03/26/2014) and Huang et al. (2006 Proc. IEEE Internat. Symp. BioMed. Imaging from Nano to Macro 2006, 7 pages; Pub.Date 2006). Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny and Gunes teach a system as set forth above. Walny also teaches that the rotation position measurement system can utilize an optical sensor (p.26 last ¶, Figs.1-4) for imaging a target attached to the rotating shaft/device (Figs.1-4). Regarding claim 29, Wang’2013, while teaching imaging the holder with the animal immobilized within it at different rotation angles, does not teach the position detector is further configured to identify the first position by detecting, within the first image data, an identifiable mark associated with the first position. However, Masaki teaches, within the same field of endeavor of controlling and measuring the rotation angle of a rotating shaft (Title, abstract and Figs.6-7), the use of a cylinder with an engraved encoder scale for measuring the rotation position with an optical sensor, with the cylinder being coaxially fixed on the rotating shaft (p.11 3rd ¶ and Fig.6). 
Therefore a person of ordinary skill in the art would have recognized as obvious that this encoding element is more direct to apply to a rotating specimen holder, as part of it, than the conventional encoder provided by Nisnevich, since it can provide a direct measure of the rotation position of the holder when both holder and engraved encoder are imaged together, and since Wang’2013 already teaches imaging the holder and placing the holder at different angles. Additionally, Huang teaches, within the same field of endeavor of optically imaging test animals (Title and abstract), reconstructing images (abstract) from 2D images of mice placed in a 50 ml tube with the animal embedded within a soft foam (p.2 col.2 3rd ¶ “For small animals such as mice, a 50 ml tube cut at both ends and the bottom can be used as a holder. The anesthetized animal fits easily in the tube and can be placed in the imaging device without any discomfort. The animal can be rotated similar to the phantom-well images and 32 rotational images can be acquired. An added advantage of the 50 ml tube is that it can be fitted with a soft foam to make the animal fit snugly in the tube, and the outside of the tube can be marked with fiduciary markers for anatomical reference”), the tube and foam therefore reading on a mold with identifiable marks to place the test animal within the field of imaging and then rotating the animal within the mold to acquire a time series of images (p.2 col.2 3rd ¶ “After the animal to be imaged is inserted into a cylindrical 50 ml tube, images are acquired at every rotation stage clockwise from the vertical axis. This generates a series of images including the one without any rotation. 
Fig.1 shows some example BLI images of a mouse with tumor in the abdomen area”), therefore reading on at least a portion of the mold being shown in the first image data, wherein Masaki teaches the marking as being engraved on the support as part of the mold of the animal holder, therefore teaching the position detector is further configured to identify the first position by detecting, within the first image data, an identifiable mark associated with the first position as claimed. Therefore it would have been obvious for a person of ordinary skill in the art, before the effective filing date of the claimed invention, to have adapted the system of Wang’2013 with evidential references Wang’2012a and Wang’2012b and modified in view of Kriston, Nisnevich and Walny such that the position detector is further configured to identify the first position by detecting, within the first image data, an identifiable mark associated with the first position, since one of ordinary skill in the art would recognize that using an engraved rotation position measurement encoder with markings on the holder shaft, and having the engraving placed on the animal positioning mold, were known in the art as taught by Masaki and Huang, and since Wang’2013 and Huang already teach imaging the holder and placing the holder at different angles. One of ordinary skill in the art would have expected that this modification could have been made with predictable results, since both Wang’2013 and Masaki teach the use of optical imaging for image analysis of a rotating holder. The motivation would have been to provide a simpler and more accurate measure of the rotation position for image processing without extra registration, as suggested by Masaki (p.11 3rd ¶). Regarding dependent claim 30, all the elements of this claim are instantly disclosed or fully envisioned by the combination of Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny, Gunes, Masaki and Huang. 
Regarding claim 30, as discussed above for claim 29, Huang and Masaki teach the identifiable marks are placed on the mold and visible in the images, therefore teaching the identifiable mark is identified on a mold in which the animal subject was placed to capture the first image data, at least a portion of the mold being shown in the first image data, as claimed. Claims 31, 32 and 39 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (USPN 20130034203 A1; Pub.Date 02/07/2013; Fil.Date 08/01/2012; hereafter Wang’2013), as evidenced by Wang et al. (2012 IEEE Trans. Med. Imaging 31:88-102; Pub.Date 2012; hereafter Wang’2012a) and Wang et al. (2012 Mol. Imaging Biol. 14:408-419; Pub.Date 2012; hereafter Wang’2012b), in view of Kriston et al. (USPN 20150317785 A1; Pub.Date 11/05/2015; Fil.Date 07/15/2015), in view of Nisnevich et al. (USPN 7588314 B1; Pat.Date 09/15/2009; Fil.Date 01/26/2006), in view of Walny et al. (FOR DE 102013200210 B3; Pub.Date 12/06/2014; Fil.Date 01/09/2013) and in view of Gunes et al. (2016 SIViP 10:853–860; Pub.Date 10/2015) as applied to claims 21 and 33, and further in view of Huang et al. (2006 Proc. IEEE Internat. Symp. BioMed. Imaging from Nano to Macro 2006, 7 pages; Pub.Date 2006). Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny and Gunes teach a system as set forth above. Walny also teaches that the rotation position measurement system can utilize an optical sensor (p.26 last ¶, Figs.1-4) for imaging a target attached to the rotating shaft/device (Figs.1-4). Regarding claim 31, Wang’2013 teaches an imaging controller ([0072] “The registration time for each combination was ...130 sec based on a PC of 3.05 GHz CPU and 5.99 GB RAM”) configured to receive, from an imaging device, information at a third time identifying the animal subject to be imaged (Fig. 9 performing the imaging at 0, 45, 90 and 135° angles). 
However, Wang’2013 does not specifically teach performing these different angles within the same sequence or combination so as to identify a third position definition for the animal subject, the third position definition identifying, for the animal subject imaged while in a third position, a third location for the anatomical feature; generate a configuration command indicating sensor parameters for imaging the animal subject using the third position definition and the processing protocol; and transmit the configuration command to the imaging device, as in claim 31. However, Huang teaches performing the imaging of the test animal with a sequence of more than two images (p.2 col.2 3rd ¶ “The animal can be rotated similar to the phantom-well images and 32 rotational images can be acquired”), therefore teaching to modify Wang’2013 to perform at least a third imaging at a different angle, such as the relative rotation angles of 0°, 45° and 90° that Wang’2013 teaches (Fig.9), the imaging at 45° being the same as at 0° and 90°, reading on identifying a third position definition for the animal subject, the third position definition identifying, for the animal subject imaged while in a third position, a third location for the anatomical feature. The image acquisition already taught by Wang’2013 (Fig. 9 operating at 45° imaging), with the imaging controller already set to command imaging at a 45° angle, teaches generating a configuration command indicating sensor parameters for imaging the animal subject using the third position definition and the processing protocol, and transmitting the configuration command to the imaging device, as claimed. 
Therefore it would have been obvious for a person of ordinary skill in the art, before the effective filing date of the claimed invention, to have adapted the system of Wang’2013 with the evidential references Wang’2012a and Wang’2012b, modified in view of Kriston, Nisnevich, Walny and Gunes, such that the system is configured to identify a third position definition for the animal subject, the third position definition identifying, for the animal subject imaged while in a third position, a third location for the anatomical feature; to generate a configuration command indicating sensor parameters for imaging the animal subject using the third position definition and the processing protocol; and to transmit the configuration command to the imaging device, since one of ordinary skill in the art would recognize that performing imaging of a test animal at more than two angles was known in the art, as taught by Huang, since identifying a third position definition identifying a third location of the different targeted organs to image while the animal is imaged in a third position was known in the art, as taught by Wang’2013, and since using an imaging controller to control the image acquisition of the imaging device using a configuration command was also known in the art, as taught by Wang’2013 (Fig.9). One of ordinary skill in the art would have expected that this modification could have been made with predictable results, since Wang’2013, Nisnevich, Walny and Huang all teach the use of optical imaging for tracking the location and position of an object or subject. The motivation would have been to provide additional data for improving the image reconstruction and co-registration method, as suggested by Huang (abstract). Regarding dependent claim 32, all the elements of this claim are instantly disclosed or fully envisioned by the combination of Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny, Gunes and Huang. 
Regarding claim 32, Wang’2013 does not teach the imaging controller is further configured to generate the configuration command using imaging results for the animal subject stored before the third time, as in claim 32. However, Huang teaches that its system and method can be used for monitoring tumor growth within the test animal (p.7 col.1 last ¶ “This is the first image-based BLI reconstruction method presented, to the best of our knowledge, and the simplicity and efficiency of our framework gives it great potential in studying ... tumor growth..”), therefore teaching repeating the same time series at different times for the same animal, with the same target organ location in the same positions, therefore teaching to generate the configuration command using imaging results for the animal subject stored before the third time, as claimed, in order to assess the increase in volume of the tumor from the previous analysis. Therefore it would have been obvious for a person of ordinary skill in the art, before the effective filing date of the claimed invention, to have adapted the system of Wang’2013 with the evidential references Wang’2012a and Wang’2012b, modified in view of Kriston, Nisnevich, Walny, Gunes and Huang, such that the system is configured to generate the configuration command using imaging results for the animal subject stored before the third time, since one of ordinary skill in the art would recognize that repeating the imaging of a test animal more than once was known in the art in order to study tumor growth in organs, as taught by Huang. One of ordinary skill in the art would have expected that this modification could have been made with predictable results, since Wang’2013, Nisnevich, Walny and Huang all teach the use of optical imaging for tracking the location and position of an object or subject. The motivation would have been to monitor the growth of tumor and its response to therapy, as suggested by Huang (p.7 col.1 last ¶). 
Regarding claim 39, Wang’2013 with the evidential references Wang’2012a and Wang’2012b, modified in view of Kriston, Nisnevich, Walny and Gunes, does not specifically teach the imaging device is configured to capture image data of the given subject while the given subject is positioned within an optically transparent animal mold, as in claim 39. However, as discussed above, Huang teaches, within the same field of endeavor of optically imaging test animals (Title and abstract), reconstructing images (abstract) from 2D images of mice placed in a 50 ml tube with the animal embedded within a soft foam (p.2 col.2 3rd ¶ “For small animals such as mice, a 50 ml tube cut at both ends and the bottom can be used as a holder. The anesthetized animal fits easily in the tube and can be placed in the imaging device without any discomfort. The animal can be rotated similar to the phantom-well images and 32 rotational images can be acquired. An added advantage of the 50 ml tube is that it can be fitted with a soft foam to make the animal fit snugly in the tube, and the outside of the tube can be marked with fiduciary markers for anatomical reference”), the tube and foam reading on a mold to place the test animal within the field of imaging and then rotating the animal with the mold to acquire a time series of images (p.2 col.2 3rd ¶ “After the animal to be imaged is inserted into a cylindrical 50 ml tube, images are acquired at every rotation stage clockwise from the vertical axis. This generates a series of images including the one without any rotation. Fig.1 shows some example BLI images of a mouse with tumor in the abdomen area”), with Huang also teaching that the 50 ml tube with the foam is optically transparent (Fig. 
1, wherein the mouse is visible on the display, therefore the mold holding the mouse is optically transparent), thereby teaching the imaging device is configured to capture image data of the given subject while the given subject is positioned within an optically transparent animal mold, as claimed. Therefore it would have been obvious for a person of ordinary skill in the art, before the effective filing date of the claimed invention, to have adapted the system of Wang’2013 with the evidential references Wang’2012a and Wang’2012b, modified in view of Kriston, Nisnevich, Walny and Gunes, such that the imaging device is configured to capture image data of the given subject while the given subject is positioned within an optically transparent animal mold, since one of ordinary skill in the art would recognize that using a cylindrical tube with soft foam to create a holder mold for a test animal to be imaged optically by rotating the holder around the craniocaudal axis of the animal was known in the art, as taught by Huang. One of ordinary skill in the art would have expected that this modification could have been made with predictable results, since Wang’2013, Nisnevich, Walny and Huang all teach the use of optical imaging for tracking the location and position of an object or subject. The motivation would have been to provide a fitting optically transparent holder for a test mouse without discomfort for the mouse, with the outside of the holder providing a space to place markers for tracking the rotation of the tube/mouse, as suggested by Huang (p.2 col.2 3rd ¶). Claims 36-37 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (USPN 20130034203 A1; Pub.Date 02/07/2013; Fil.Date 08/01/2012; hereafter Wang’2013), as evidenced by Wang et al. (2012 IEEE Trans. Med. Imaging 31:88-102; Pub.Date 2012; hereafter Wang’2012a) and Wang et al. (2012 Mol. Imaging Biol. 14:408-419; Pub.Date 2012; hereafter Wang’2012b), in view of Kriston et al. 
(USPN 20150317785 A1; Pub.Date 11/05/2015; Fil.Date 07/15/2015), in view of Nisnevich et al. (USPN 7588314 B1; Pat.Date 09/15/2009; Fil.Date 01/26/2006), in view of Walny et al. (FOR DE 102013200210 B3; Pub.Date 12/06/2014; Fil.Date 01/09/2013) and in view of Gunes et al. (2016 SIViP 10:853–860; Pub.Date 10/2015) as applied to claims 21 and 33, and further in view of Masaki (FOR KR 20160003181 A; Pub.Date 01/08/2016; Fil.Date 03/26/2014), in view of Huang et al. (2006 Proc. IEEE Internat. Symp. BioMed. Imaging from Nano to Macro 2006, 7 pages; Pub.Date 2006), and in view of Troy et al. (2004 Molecular Imaging 3:9-23; Pub.Date 2004). Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny and Gunes teach a system as set forth above. As discussed above, Gunes teaches the generation of grayscale images defined with averaged color intensity, and teaches wherein the image processor is further configured to transform a spatial distribution of color intensities into a map of intensity probabilities for each individual image, and Wang’2013 also teaches imaging the holder with the animal immobilized within it at different rotation angles. However, Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny and Gunes do not specifically teach the position detector is further configured to identify the first position by detecting, within the first image data, an identifiable mark associated with the first position, or that the system further includes an optical calibrator enabling the translation of a physical quantity to a biological quantity, as in claim 36. However, Masaki teaches, within the same field of endeavor of controlling and measuring the rotation angle of a rotating shaft (Title, abstract and Figs.6-7), the use of a cylinder with an engraved encoder scale for measuring the rotation position with an optical sensor, with the cylinder being coaxially fixed on the rotating shaft (p.11 3rd ¶ and Fig.6). 
Therefore a person of ordinary skill in the art would have recognized as obvious that this encoding element is more direct to apply to a rotating specimen holder, as part of it, than the conventional encoder provided by Nisnevich, since it can provide a direct measure of the rotation position of the holder when both holder and engraved encoder are imaged together, and since Wang’2013 already teaches imaging the holder and placing the holder at different angles. Additionally, Huang teaches, within the same field of endeavor of optically imaging test animals (Title and abstract), reconstructing images (abstract) from 2D images of mice placed in a 50 ml tube with the animal embedded within a soft foam (p.2 col.2 3rd ¶ “For small animals such as mice, a 50 ml tube cut at both ends and the bottom can be used as a holder. The anesthetized animal fits easily in the tube and can be placed in the imaging device without any discomfort. The animal can be rotated similar to the phantom-well images and 32 rotational images can be acquired. An added advantage of the 50 ml tube is that it can be fitted with a soft foam to make the animal fit snugly in the tube, and the outside of the tube can be marked with fiduciary markers for anatomical reference”), the tube and foam therefore reading on a mold with identifiable marks to place the test animal within the field of imaging and then rotating the animal within the mold to acquire a time series of images (p.2 col.2 3rd ¶ “After the animal to be imaged is inserted into a cylindrical 50 ml tube, images are acquired at every rotation stage clockwise from the vertical axis. This generates a series of images including the one without any rotation. 
Fig.1 shows some example BLI images of a mouse with tumor in the abdomen area”), therefore reading on at least a portion of the mold being shown in the first image data, wherein Masaki teaches the marking as being engraved on the support as part of the mold of the animal holder, therefore teaching the position detector is further configured to identify the first position by detecting, within the first image data, an identifiable mark associated with the first position as claimed. Therefore it would have been obvious for a person of ordinary skill in the art, before the effective filing date of the claimed invention, to have adapted the system of Wang’2013 with evidential references Wang’2012a and Wang’2012b and modified in view of Kriston, Nisnevich, Walny and Gunes such that the position detector is further configured to identify the first position by detecting, within the first image data, an identifiable mark associated with the first position, since one of ordinary skill in the art would recognize that using an engraved rotation position measurement encoder with markings on the holder shaft, and having the engraving placed on the animal positioning mold, were known in the art as taught by Masaki and Huang, and since Wang’2013 and Huang already teach imaging the holder and placing the holder at different angles. One of ordinary skill in the art would have expected that this modification could have been made with predictable results, since both Wang’2013 and Masaki teach the use of optical imaging for image analysis of a rotating holder. The motivation would have been to provide a simpler and more accurate measure of the rotation position for image processing without extra registration, as suggested by Masaki (p.11 3rd ¶). 
Additionally, Troy teaches, within the same field of endeavor of imaging animal models for medical applications (Title, abstract), the use of an optical system (Fig.2) for optically scanning the animal model for bioluminescence signals, with the instrument being calibrated in physical units of radiance and further calibrated to convert these units into biological units representing the number of cells exhibiting bioluminescence (abstract, p.12 col.1 2nd ¶, p.13-14 ¶ Instrument Calibration, and Fig. 9 converting the physical signal into biological activity for bioluminescence/fluorescence calibration as applied to animal images as in Figs. 12-13). Since one of ordinary skill in the art would have recognized that separate programs could be executed within the same processor, using an optical sensor/camera for capturing optical radiance as an image of the mouse, and since the position detector is also implemented with a processor processing the optical signal for the position of the animal support as discussed above, Troy therefore teaches an optical calibrator (as a processor) enabling the translation of a physical quantity (radiance as bioluminescence/fluorescence from the optical sensor) to a biological quantity (count of positively labeled cells) as claimed. 
Therefore it would have been obvious for a person of ordinary skill in the art, before the effective filing date of the claimed invention, to have adapted the system of Wang’2013 with evidential references Wang’2012a and Wang’2012b and modified in view of Kriston, Nisnevich, Walny, Gunes, Masaki and Huang such that the system further includes an optical calibrator enabling the translation of a physical quantity to a biological quantity, since one of ordinary skill in the art would recognize that using an optical system for acquiring in-vivo bioluminescence/biofluorescence with a processor for converting the radiance into a calibrated bioluminescent/fluorescent cell count was known in the art, as taught by Troy. One of ordinary skill in the art would have expected that this modification could have been made with predictable results, since both Wang’2013 and Troy teach the use of optical imaging for image analysis of animal models. The motivation would have been to provide a calibrated system for assessing in-vivo monitoring of biological activity in animal models for scientific and medical discovery, as suggested by Troy (abstract). Regarding dependent claim 37, all the elements of this claim are instantly disclosed or fully envisioned by the combination of Wang’2013, Wang’2012a, Wang’2012b, Kriston, Nisnevich, Walny, Masaki, Huang and Troy. 
Regarding claim 37, as discussed above, Masaki and Huang teach the identifiable mark being identified on a positioning assembly with engraved encoding markings on the rotated animal mold holder, which is used to determine the rotation position for imaging the whole animal-holder assembly, therefore teaching the identifiable mark is identified on a positioning assembly in which the given subject was placed to capture the image data, at least a portion of the positioning assembly being shown in the image data, as claimed. Wang’2013 teaches the image receiver is further configured to receive the first image data from a second sensing device (Fig. 7A-B with receiving the coronal image (0° angle) from X-ray and/or optical camera), and Troy also teaches the use of an additional optical camera for bioluminescence/fluorescence imaging and analysis wherein the calibration of the instrument shows a proportionality factor between the bioluminescence/fluorescence magnitude and the cell count (Fig. 9), therefore teaching the optical calibrator generates a calibration factor for one or more experimental values as discussed above as claimed. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to PATRICK M MEHL, whose telephone number is (571) 272-0572. The examiner can normally be reached Monday-Friday 9AM-6PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KEITH M RAYMOND, can be reached at (571) 270-1790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /PATRICK M MEHL/ Examiner, Art Unit 3798 /KEITH M RAYMOND/ Supervisory Patent Examiner, Art Unit 3798

Prosecution Timeline

Mar 02, 2020
Application Filed
Aug 15, 2022
Non-Final Rejection — §103
Jan 17, 2023
Response Filed
Apr 17, 2023
Final Rejection — §103
Sep 21, 2023
Request for Continued Examination
Oct 06, 2023
Response after Non-Final Action
Feb 01, 2024
Non-Final Rejection — §103
Jun 10, 2024
Response Filed
Jul 10, 2024
Interview Requested
Jul 18, 2024
Applicant Interview (Telephonic)
Jul 18, 2024
Examiner Interview Summary
Aug 19, 2024
Final Rejection — §103
Nov 22, 2024
Request for Continued Examination
Nov 25, 2024
Response after Non-Final Action
Dec 12, 2024
Non-Final Rejection — §103
Mar 17, 2025
Response Filed
May 27, 2025
Final Rejection — §103
Nov 26, 2025
Request for Continued Examination
Dec 16, 2025
Response after Non-Final Action
Dec 31, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12569140
FIBER-BASED MULTIMODAL BIOPHOTONIC IMAGING AND SPECTROSCOPY SYSTEM
2y 5m to grant Granted Mar 10, 2026
Patent 12504678
PROJECTION DEVICE
2y 5m to grant Granted Dec 23, 2025
Patent 12411399
PROJECTION SYSTEM AND PROJECTOR
2y 5m to grant Granted Sep 09, 2025
Patent 9653512
SOLID-STATE IMAGE PICKUP DEVICE AND ELECTRONIC APPARATUS USING THE SAME
2y 5m to grant Granted May 16, 2017
Patent 9642149
USER SCHEDULING METHOD, MASTER BASE STATION, USER EQUIPMENT, AND HETEROGENEOUS NETWORK
2y 5m to grant Granted May 02, 2017
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
65%
Grant Probability
46%
With Interview (-19.1%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 418 resolved cases by this examiner. Grant probability derived from career allow rate.
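The projection figures above are consistent with a simple derivation from the examiner's career data: 271 grants out of 418 resolved cases gives the 65% grant probability, and applying the page's -19.1 percentage-point interview lift yields the 46% "With Interview" figure. A sketch of that arithmetic (the additive-lift assumption is ours; the page does not document its formula):

```python
# Sketch of how the dashboard figures plausibly combine. The additive-lift
# assumption is ours -- the page does not state its exact formula.
career_allow_rate = 271 / 418   # granted / resolved, per Examiner Intelligence
interview_lift_pp = -19.1       # percentage points, per the page

grant_probability = round(career_allow_rate * 100)                   # -> 65
with_interview = round(career_allow_rate * 100 + interview_lift_pp)  # -> 46
print(grant_probability, with_interview)
```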
