DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice of Pre-AIA or AIA Status
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 2/29/2024 complies with the provisions of 37 CFR 1.97. Accordingly, the examiner considered the information disclosure statement.
Election/Restrictions
Applicant's election of Group 1 (claims 1-7) without traverse in the reply filed on 11/11/2025 is acknowledged. Claim 8 is withdrawn as being drawn to a non-elected group, and claims 1-7 are examined herein.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 1, the term “the stabilized image” (line 17) is vague and lacks antecedent basis. Claim 1 recites “a stabilized retinal image” twice (line 1 and line 19), and claim 4 (line 2) also recites “the stabilized image,” but claim 1 nowhere recites “a stabilized image”; it is therefore unclear how “the stabilized image” relates to the two recitations of “a stabilized retinal image.” Further, the term “a light source,” recited in both line 17 and line 20, is vague and renders the claims indefinite, as it is unclear whether the two light sources (line 17 and line 20) are the same; compare, for example, Applicant’s withdrawn claim 8, which recites “by the light source” in line 20. For purposes of examination, the claim will be treated as reciting “a stabilized image” (line 17), “the stabilized retinal image” (line 20), and “the light source” (line 20). It is suggested that Applicant amend the claim and provide explanation in order to remove the indefiniteness issues.
Claims 2-7 are rejected as containing the deficiencies of claim 1 through their dependency on claim 1.
Therefore, proper amendments are required to clarify the scope of the claims and overcome the rejections.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-7 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chinnock et al. (US20130033593).
Regarding claim 1, Chinnock teaches a system for capturing a stabilized retinal image of an eye of a subject by a handheld retinal imaging device (Chinnock, figs.1-18, abstract, a camera for capturing an image of an object, for example an eye; paragraph [0005], the preferred embodiment of the portable retinal camera; paragraph [0034], in FIG. 1, a portable retinal camera), the system comprising:
the handheld retinal imaging device (paragraph [0034], in FIG. 1, a portable retinal camera); and
an external computing device (figs.15-16, paragraph [0032-33], FIG. 15 is an electronics block diagram of an embodiment of the camera; FIG. 16 is a functional flow diagram illustrating an exemplary method of image acquisition) configured for:
receiving a user selection (fig.16, paragraph [0126], A115—The user selects images to be transmitted to another memory device) from a user (fig.16, paragraph [0107], the user), the user selection comprising a type of image to be captured (fig.16, paragraph [0125], A110—The user reviews captured images and selects which ones to keep);
conveying data on an initial position of a fixation stimulus (figs.15-16, paragraph [0128], a fixation target may be displayed to the patient to help stabilize the retinal position during coarse and fine alignment; paragraph [0082], as shown in FIG. 15, on system power-up, digital initialization commands are sent to the camera(s) to set their operating modes; paragraph [0126], A115—The user selects images to be transmitted to another memory device or telecommunications network) on a fixation screen (paragraph [0128], a fixation target may be displayed to the patient to help stabilize the retinal position during coarse and fine alignment) based on a user selection (fig.16, A110); see also Chinnock, paragraphs [0114]-[0116]; thus, Chinnock teaches conveying data on an initial position of a fixation stimulus on a fixation screen based on a user selection;
receiving video data of the eye of the subject from a camera and position data from a 3-D accelerometer and displaying a video based on the video data received from the camera (see paragraphs [0070], [0074], [0088], disclosing the same function of receiving video data of the eye of the subject from a camera and position data from a 3-D accelerometer and displaying a video based on the video data received from the camera; paragraph [0070], FUNcam 300 is the primary image capture device in PRC 10; paragraph [0074], the image processing system 600 calculates this shift vector on a frame-to-frame basis by computing; paragraph [0088], Accelerometer 625 … this data may be recorded);
calculating a new position for the fixation stimulus on the fixation screen (paragraph [0051], this algorithm continually calculates a focus figure-of-merit as the user attempts to focus PRC 10 by watching the image on display screen 120 while moving PRC 10 towards and away from the patient), based on one or more of the user selection (fig.16, select images), the video data received from the camera, and the position data received from the 3-D accelerometer (paragraphs [0075], [0088], Accelerometer 625);
analyzing one or more frames of the video for determining, using artificial intelligence techniques (paragraph [0119], P220—In previous steps, in the preferred embodiment, the system processed ANTcam images to achieve “coarse alignment” and “coarse focus”; in step 220, the system processes the FUNcam video stream to achieve fine focus and alignment), a stable image of the eye of the subject (paragraph [0075], The presence of a fixation target makes it much easier for the patient to maintain his eye a steady position);
on sensing a stabilized image, conveying a trigger to a light source of the handheld retinal imaging device for illuminating the eye of the subject and conveying a trigger to the camera for capturing a stabilized retinal image, illuminated by a light source (Chinnock, fig.16, P230, paragraph [0122], P230—When pre-set conditions for focus and alignment are satisfied, and the correct exposure is calculated, the system automatically turns off the autofocus illumination, turns on the white illumination, and captures a burst of images, i.e., a short video clip).
Regarding claim 2, Chinnock discloses the invention as described in claim 1, and Chinnock further teaches wherein the data on the initial position of the fixation stimulus on the fixation screen (paragraph [0128], a fixation target may be displayed to the patient to help stabilize the retinal position during coarse and fine alignment) is determined by the computing device (paragraph [0128], method of acquiring images) based on the user selection and corresponding predetermined data loaded on the computing device (paragraph [0119], very rapidly corrects the image shift caused by inadvertent movement, i.e., returns the image to its prior location on the FUNcam image sensor 342).
Regarding claim 3, Chinnock discloses the invention as described in claim 1, and Chinnock further teaches wherein the external computing device calculates data (paragraph [0119], the shift vectors calculated in steps P190/U200/P200) for a new position for the fixation stimulus on the fixation screen (paragraph [0119], very rapidly corrects the image shift caused by inadvertent movement, i.e., returns the image to its prior location on the FUNcam image sensor 342) based on analyzing, using artificial intelligence techniques (paragraph [0119], In step 220, the system processes the FUNcam video stream), the video data of the eye of the subject received from the camera and position data from the 3-D accelerometer (paragraph [0088], Accelerometer 625 comprises a chip and firmware driver. Accelerometer 625 senses the orientation and movement of the device in 3 axes. Accelerometer data may also be used to determine PRC 10's orientation, and thus the patient's orientation during image capture. This data may be recorded for forensic purposes).
Regarding claim 4, Chinnock discloses the invention as described in claim 1, and Chinnock further teaches wherein the computing device senses a stabilized image based on at least a frame of the video data received from the camera and using artificial intelligence techniques (Chinnock, paragraph [0119], Because the frame rate of ANTcam 350 can be so high compared to the frame rate of FUNcam 300, image shifts can be corrected during a single FUNcam 300 exposure, eliminating or minimizing the effects of motion blur, and improving the sharpness of captured images; also see paragraph [0032], FIG. 15 is an electronics block diagram of an embodiment of the camera; paragraph [0074], the image processing system 600 calculates this shift vector on a frame-to-frame basis by computing factors relating to the patient's pupil in the ANTcam 350 video stream, such as the centroid of pupil edge segments).
Regarding claim 5, Chinnock discloses the invention as described in claim 1, and Chinnock further teaches wherein the user selection is a type of image selected from a group of types of images including, but not limited to, macula centered, optic disc centered, peripheral up, peripheral down, peripheral left diagonal, and peripheral right diagonal (Chinnock, paragraph [0120], precise alignment in X or Y axes is essential; auto alignment is achieved by detecting artifacts in the FUNcam video images that result from misalignment in these axes; paragraph [0116], Tilt of PRC 10 relative to the optical axis of the eye is also determined using edge detection of the pupil. When the user is looking at the central fixation target, i.e., along PRC 10's optical axis, this is represented by showing an incomplete overlaid circle. The user then re-instructs the patient to look at the fixation target).
Regarding claim 6, Chinnock discloses the invention as described in claim 1, and Chinnock further teaches wherein the handheld retinal imaging device comprises:
an infrared light source for illuminating the eye of the subject, through illuminating optics configured for illuminating the eye of the subject (paragraph [0012], also featured is a camera for capturing an image of the fundus of a human eye, comprising a first light source that emits deep red or near infrared light);
the source of visible light for illuminating the eye of the subject, through the illuminating optics, for a predefined duration, based on a trigger received from the external computing device (see Chinnock, fig.16, P230, WHEN CONDITIONS ARE SATISFIED, TURN OFF AUTOFOCUS ILLUMINATION AND TRIGGER FUNCAM WHITE FLASH AND IMAGE CAPTURE);
the fixation screen configured for displaying the fixation stimulus for the subject to gaze at during the use of the device, the position of the fixation stimulus on the fixation screen being based on an input received from the external computing device (paragraph [0075], Fixation Target Optical Path—Some embodiments of PRC 10 comprise a fixation target 482 towards which the patient is instructed to direct his gaze. See, for example, the embodiment illustrated in FIG. 6. The presence of a fixation target makes it much easier for the patient to maintain his eye a steady position. Additionally, having a target to view also helps the patient maintain a constant focus, say, at infinity); and
the 3-D accelerometer for sensing data corresponding to a movement of the handheld device and conveying it to the external computing device (see Chinnock; this claim recites similar limitations as those in corresponding claim 1 and is rejected based on the same teachings and rationale); and
a camera configured for (paragraph [0082], As shown in FIG. 15, Optical Bench 500 supports the PRC's illumination and imaging functions. It comprises one or more cameras, such as an ANTcam 350 and a FUNcam 300. The cameras typically communicate with microprocessor/controller 600 over a standard serial bus): capturing video data of the eye of the subject and conveying the video data to the external device (see Chinnock; this claim recites similar limitations as those in corresponding claim 1 and is rejected based on the same teachings and rationale); and
capturing the retinal image of the eye of the subject, based on a trigger received from the external computing device (figs.1-2, paragraph [0040], each handle 122 has a thumb-operable trigger 125 for initiating functions such as image capture) and conveying retinal image data to the external computing device (figs.15-16, external computing device).
Regarding claim 7, Chinnock discloses the invention as described in claim 6, and Chinnock further teaches wherein the locations of the infrared light source and the source of visible light are interchanged with corresponding changes to the illuminating optics (paragraph [0129], illuminating the patient's retina with a deep red or near IR illuminator; activating a white light illumination source, said source directed to the retina).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KUEI-JEN EDENFIELD whose telephone number is (571) 272-3005. The examiner can normally be reached on Mon-Thurs 8:00 AM - 5:30 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Thomas Pham, can be reached at 571-272-3689. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/KUEI-JEN L EDENFIELD/
Examiner, Art Unit 2872
/THOMAS K PHAM/Supervisory Patent Examiner, Art Unit 2872