DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Allowable Subject Matter
Claims 6-11 are currently subject to nonstatutory double patenting rejections, but are otherwise not subject to any prior art rejections under either 35 U.S.C. § 102 or 35 U.S.C. § 103. Assuming that the foregoing shortcomings of these claims were rectified by the timely filing of a terminal disclaimer, these claims would be allowable.
The following is a statement of reasons for the indication of allowable subject matter:
Claim 6 recites the same patentable features as were found allowable in parent application no. 17/318233, which issued as United States Patent No. 11,972,544 on 30 April 2024. Claim 6, and its dependent claims, are allowable for the same reasons as were provided in the parent application.
With regards to claims 7-11, these claims depend from claim 6 and therefore incorporate the features of that claim that were found allowable.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claim 1 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 1
U.S. Patent No. 11,972,544
Claim 1
An imaging method comprising:
An imaging method comprising:
inputting redundant optical coherence tomography (OCT) data of an object to at least one machine learning system trained to identify structural vasculature information or blood flow information in the redundant OCT data;
inputting the redundant OCT data to at least one trained machine learning system; … wherein the trained machine learning system is trained to identify structural vasculature information and blood flow information in the redundant OCT data
registering or merging the redundant OCT data by the at least one trained machine learning system;
registering or merging the redundant OCT data by the at least one trained machine learning system;
generating a visualization of the structural vasculature information or blood flow information based on an output of the at least one trained machine learning system.
generating three-dimensional (3D) OCT angiography (OCTA) data by the at least one trained machine learning system, the 3D OCTA data being an output from the at least one trained machine learning system, …, and wherein the 3D OCTA data comprises the identified structural vasculature information and the identified blood flow information.
Claim 2 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 2
U.S. Patent No. 11,972,544
Claim 1
The method of claim 1, wherein the redundant OCT data is obtained with a Lissajous, orthogonal, or parallel scanning pattern.
obtaining redundant optical coherence tomography (OCT) data of an object with a Lissajous, orthogonal, or parallel scanning pattern;
Claim 3 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 2 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 3
U.S. Patent No. 11,972,544
Claim 2
The method of claim 1, wherein the redundant OCT data is a plurality of B-frames from a same location of the object.
The method of claim 1, wherein the redundant OCT data is a plurality of B-frames from a same location of the object.
Claim 4 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 3 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 4
U.S. Patent No. 11,972,544
Claim 3
The method of claim 1, wherein the redundant OCT data is a plurality of volumes of the object.
The method of claim 1, wherein the redundant OCT data is a plurality of volumes of the object.
Claim 5 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 4 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 5
U.S. Patent No. 11,972,544
Claim 4
The method of claim 1, wherein the redundant OCT data comprises a plurality of scan pairs of OCT data from a same location of the object.
The method of claim 1, wherein the redundant OCT data comprises a plurality of scan pairs of OCT data from a same location of the object.
Claim 6 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 6
U.S. Patent No. 11,972,544
Claim 1
The method of claim 1, further comprising:
An imaging method comprising:
generating three-dimensional (3D) OCT angiography (OCTA) data, the 3D OCTA data being an output from the at least one trained machine learning system,
generating three-dimensional (3D) OCT angiography (OCTA) data by the at least one trained machine learning system, the 3D OCTA data being an output from the at least one trained machine learning system, …
wherein the 3D OCTA data comprises the identified structural vasculature information and the identified blood flow information.
wherein the 3D OCTA data comprises the identified structural vasculature information and the identified blood flow information.
Claim 7 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 5 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 7
U.S. Patent No. 11,972,544
Claim 5
The method of claim 6, wherein the at least one machine trained machine learning system is a single machine learning system trained to register or merge the redundant OCT data and to output the 3D OCTA data.
The method of claim 1, wherein the at least one machine trained machine learning system is a single machine learning system trained to register or merge the redundant OCT data and to output the 3D OCTA data.
Claim 8 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 6 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 8
U.S. Patent No. 11,972,544
Claim 6
The method of claim 6, wherein the at least one trained machine learning system comprises:
The method of claim 1, wherein the at least one trained machine learning system comprises:
a first trained machine learning system configured to receive the redundant OCT data and to output registered or merged OCT data; and
a first trained machine learning system configured to receive the redundant OCT data and to output registered or merged OCT data; and
a second trained machine learning system configured to receive the registered or merged OCT data and to output the 3D OCTA data.
a second trained machine learning system configured to receive the registered or merged OCT data and to output the 3D OCTA data.
Claim 9 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 8 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 9
U.S. Patent No. 11,972,544
Claim 8
The method of claim 6, wherein ground truth training data for the at least one machine learning system comprises OCTA B-frames generated by determining a difference or ratio between repeated OCT scans at a same location of the object.
The method of claim 1, wherein ground truth training data for the at least one machine learning system comprises OCTA B-frames generated by determining a difference or ratio between repeated OCT scans at a same location of the object.
Claim 10 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 10 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 10
U.S. Patent No. 11,972,544
Claim 10
The method of claim 6, further comprising:
The method of claim 1, further comprising:
obtaining second 3D OCTA data of the object;
obtaining second 3D OCTA data of the object;
inputting the generated 3D OCTA data and the second 3D OCTA to a longitudinal machine learning system trained to register the second 3D OCTA data to the generated 3D OCTA data and output the registered 3D OCTA data; and
inputting the generated 3D OCTA data and the second 3D OCTA to a longitudinal machine learning system trained to register the second 3D OCTA data to the generated 3D OCTA data and output the registered 3D OCTA data; and
determining a structural change in the object based on the registered 3D OCTA data.
determining a structural change in the object based on the registered 3D OCTA data.
Claim 11 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 11 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 11
U.S. Patent No. 11,972,544
Claim 11
The method of claim 6, further comprising:
generating a 3D visualization of the object based on the 3D OCTA data.
The method of claim 1, further comprising: generating a 3D visualization of the object based on the 3D OCTA data.
Claim 12 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 7 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 12
U.S. Patent No. 11,972,544
Claim 7
The method of claim 1, wherein ground truth training data for the at least one machine learning system comprises results from a Monte Carlo simulation, an enhanced OCT angiography (OCTA) image, or an image from an imaging modality other than OCT.
The method of claim 1, wherein ground truth training data for the at least one machine learning system comprises results from a Monte Carlo simulation.
Claim 13 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 9 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 13
U.S. Patent No. 11,972,544
Claim 9
The method of claim 1, wherein ground truth training data for the at least one machine learning system comprises angiographic B-frames or volumes generated from an imaging modality other than OCT.
The method of claim 1, wherein ground truth training data for the at least one machine learning system comprises angiographic B-frames or volumes generated from an imaging modality other than OCT.
Claim 14 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 12 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 14
U.S. Patent No. 11,972,544
Claim 12
An imaging method comprising:
An imaging method comprising:
obtaining optical coherence tomography (OCT) data from a single scan of the object;
obtaining optical coherence tomography (OCT) data from a single scan of the object;
inputting the OCT data from the single scan to at least one trained machine learning system; and
inputting the OCT data from the single scan to at least one trained machine learning system; and
generating three-dimensional (3D) OCT angiography (OCTA) data by the at least one trained machine learning system, the 3D OCTA data being an output from the at least one trained machine learning system,
generating three-dimensional (3D) OCT angiography (OCTA) data by the at least one trained machine learning system, the 3D OCTA data being an output from the at least one trained machine learning system,
wherein the trained machine learning system is trained to identify structural vasculature information in the OCT data from the single scan, and
wherein the trained machine learning system is trained to identify structural vasculature information in the OCT data from the single scan,
wherein the 3D OCTA data comprises the identified structural vasculature information.
wherein the 3D OCTA data comprises the identified structural vasculature information…
Claim 15 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 13 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 15
U.S. Patent No. 11,972,544
Claim 13
The method of claim 14, wherein the single scan of the object follows a Lissajous, orthogonal, or parallel scanning pattern.
The method of claim 12, wherein the single scan of the object follows a Lissajous, orthogonal, or parallel scanning pattern.
Claim 16 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 14 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 16
U.S. Patent No. 11,972,544
Claim 14
The method of claim 14, wherein the OCT data comprises a plurality of scan pairs of OCT data from a same location of the object.
The method of claim 12, wherein the OCT data comprises a plurality of scan pairs of OCT data from a same location of the object.
Claim 17 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 15 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 17
U.S. Patent No. 11,972,544
Claim 15
The method of claim 14, wherein ground truth training data for the at least one machine learning system comprises results from a Monte Carlo simulation, an enhanced OCTA image, and an image from an imaging modality other than OCT.
The method of claim 12, wherein ground truth training data for the at least one machine learning system comprises results from a Monte Carlo simulation.
Claim 18 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 12 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 18
U.S. Patent No. 11,972,544
Claim 12
The method of claim 14, wherein ground truth training data for the at least one machine learning system comprises angiographic B-frames or volumes generated from an imaging modality other than OCT.
wherein ground truth training data for the at least one machine learning system comprises OCTA B-frames generated by determining a difference or ratio between repeated OCT scans at a same location of the object, or the ground truth training data for the at least one machine learning system comprises angiographic B-frames or volumes generated from an imaging modality other than OCT
Claim 19 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 16 of U.S. Patent No. 11,972,544. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application
Claim 19
U.S. Patent No. 11,972,544
Claim 16
The method of claim 14, further comprising:
generating a 3D visualization of the object based on the 3D OCTA data.
The method of claim 12, further comprising: generating a 3D visualization of the object based on the 3D OCTA data.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3-5, 13-14, 16, and 18-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Jia et al (US PG Pub. No. 2022/0151490).
With regards to claim 1, Jia discloses inputting redundant optical coherence tomography (OCT) data (e.g., “first type of OCT image (e.g., a thickness map) … and… a second type of OCT image (e.g., a reflectance intensity image)…”) of an object to at least one machine learning system trained to identify structural vasculature information in the redundant OCT data at: ¶ [0070]; ¶ [0083]; ¶¶ [0086]-[0088] and FIG. 5A; see, also, ¶¶ [0063]-[0064]; ¶ [0073]; ¶ [0082].
Jia discloses registering or merging the redundant OCT data by the at least one trained machine learning system at ¶ [0070]; ¶¶ [0087]-[0088].
Jia discloses generating a visualization (e.g., “OCTA image 302” and “overlay image 512”) of the structural vasculature information based on an output of the at least one trained machine learning system (e.g., “neural network 502”) at: ¶ [0069](“The trainer 104 may use the training OCT images 106 and the training avascular maps 108 to train a neural network 110. The neural network 110 may be trained to accept an OCT image as an input and output an avascular map…”); ¶ [0086](“the neural network 502 can correspond to the neural network 110…”); ¶ [0088](“According to various implementations, the a+1th subnetwork may output the avascular map 504.”); ¶¶ [0084]-[0085](“The avascular map in the overlay image 400 can be an image with three levels. The first level (e.g., corresponding to the exposed areas of the OCTA image 302) may correspond to vascular regions of the OCT images 300.”); ¶ [0055](“As used herein, the terms ‘vascular,’ ‘perfusion,’ and the like can refer to an area of an image that depicts vasculature. In some cases, a perfusion area can refer to an area that depicts a blood vessel or another type of vasculature.”); see, also, ¶ [0004](“With the availability of commercial Optical Coherence Tomography Angiography (OCTA) technology, ophthalmologists can acquire high quality, three-dimensional images…”) and ¶ [0083](“The B-scan can indicate a three-dimensional image of the retina, such that individual retinal layer boundaries can be determined.”).
With regards to claim 3, Jia discloses the redundant OCT data is a plurality of B-frames (e.g., “B-scans”) from a same location of the object at ¶¶ [0063]-[0064]; ¶ [0083].
With regards to claim 4, Jia discloses the redundant OCT data is a plurality of volumes of the object at ¶¶ [0063]-[0064](“B-scans of the retina may be referred to as ‘volumetric data’…”).
With regards to claim 5, Jia discloses OCT data at ¶¶ [0063]-[0064]; ¶ [0083]. Applicant admits that OCT data comprises a plurality of scan pairs of OCT data from a same location of the object at ¶ [0002] of the specification-as-filed. Thus, this limitation is inherent in Jia’s disclosure as evidenced by Applicant’s admission.
With regards to claim 13, Jia discloses ground truth training data for the at least one machine learning system comprises angiographic B-frames at ¶¶ [0063]-[0064].
With regards to claim 14, Jia discloses obtaining optical coherence tomography (e.g., “OCT images 506”) data from a single scan (e.g., “B-scan image”) of the object at ¶¶ [0073], [0082](“The OCTA image 302, the thickness image 304, and the OCT reflectance intensity map 306 may correspond to the same scan (e.g., taken at substantially the same time of the same eye of the same patient).”); [0086]-[0087] (“the subnetworks 508-1 to 508-A may receive individual types of OCT images among the OCT images 506.”) and FIG. 5A.
Jia discloses inputting the OCT data (e.g., “OCT images 506”) from the single scan to at least one trained machine learning system (e.g., “neural network 502”) at ¶¶ [0086]-[0088] and FIG. 5A.
Jia discloses generating three-dimensional (3D) OCT angiography (OCTA) data (e.g., “OCTA image 302” and “overlay image 512”) by the at least one trained machine learning system (e.g., “neural network 502”), the 3D OCTA data being an output from the at least one trained machine learning system, wherein the trained machine learning system (e.g., “neural network 502”) is trained to identify structural vasculature information in the OCT data from the single scan, and wherein the 3D OCTA data comprises the identified structural vasculature information (e.g., “avascular map 504”) at: ¶ [0069](“The trainer 104 may use the training OCT images 106 and the training avascular maps 108 to train a neural network 110. The neural network 110 may be trained to accept an OCT image as an input and output an avascular map…”); ¶ [0086](“the neural network 502 can correspond to the neural network 110…”); ¶ [0088](“According to various implementations, the a+1th subnetwork may output the avascular map 504.”); ¶¶ [0084]-[0085](“The avascular map in the overlay image 400 can be an image with three levels. The first level (e.g., corresponding to the exposed areas of the OCTA image 302) may correspond to vascular regions of the OCT images 300.”); ¶ [0055](“As used herein, the terms ‘vascular,’ ‘perfusion,’ and the like can refer to an area of an image that depicts vasculature. In some cases, a perfusion area can refer to an area that depicts a blood vessel or another type of vasculature.”); see, also, ¶ [0004](“With the availability of commercial Optical Coherence Tomography Angiography (OCTA) technology, ophthalmologists can acquire high quality, three-dimensional images…”) and ¶ [0083](“The B-scan can indicate a three-dimensional image of the retina, such that individual retinal layer boundaries can be determined.”).
With regards to claim 16, the steps performed by the method of this claim are anticipated by Jia for the same reasons as were provided in the discussion of claim 5, which recites a method performing these same steps.
With regards to claim 18, the steps performed by the method of this claim are anticipated by Jia for the same reasons as were provided in the discussion of claim 13, which recites a method performing these same steps.
With regards to claim 19, Jia discloses generating a 3D visualization of the object based on the 3D OCTA data at ¶¶ [0084]-[0085], [0089] and FIG. 4.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Jia et al (US PG Pub. No. 2022/0151490) in view of Artsyukhovich et al (US PG Pub. No. 2018/0199809).
With regards to claim 2, Jia discloses a single scan (e.g., “B-scan image”) derived from the scanning of an eye at ¶ [0073]; ¶ [0082]; ¶¶ [0086]-[0087], but does not specify the single scan of the object follows a Lissajous, orthogonal, or parallel scanning pattern. However, this limitation was known in the art:
Artsyukhovich discloses a single scan of an eye that follows a Lissajous scanning pattern at ¶¶ [0027]-[0029] and FIGS. 2A-2B; FIGS. 3A-3C. At the time of the filing of the present application, it would have been obvious to a person of ordinary skill in the art to follow a Lissajous scanning pattern, as taught by Artsyukhovich, when scanning an eye according to the method taught by Jia. The motivation for doing so comes from Artsyukhovich, which discloses that its embodiments, “can scan according to a Lissajous pattern faster than saccadic eye movements. As a result, the eye may be considered motionless during Lissajous scanning according to certain embodiments, which may be particularly useful for intra-operative aberrometry integrated with OCT.” (¶ [0034]). Therefore, it would have been obvious to combine Artsyukhovich with Jia to obtain the invention specified in this claim.
With regards to claim 15, the steps performed by the method of this claim are obvious over the combination of Jia and Artsyukhovich for the same reasons as were provided in the discussion of claim 2, which recites a method performing these same steps.
Claims 12 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Jia et al (US PG Pub. No. 2022/0151490) in view of Xu et al (US PG Pub. No. 2020/0273214).
With regards to claim 12, Jia discloses training a machine learning system at: ¶ [0069](“The trainer 104 may use the training OCT images 106 and the training avascular maps 108 to train a neural network 110. The neural network 110 may be trained to accept an OCT image as an input and output an avascular map…”), but Jia does not specify using ground truth training data comprising results from a Monte Carlo simulation. However, this limitation was known in the art:
Xu discloses ground truth training data for the at least one machine learning system comprises results from a Monte Carlo simulation at ¶ [0052]. At the time of the filing of the present application, it would have been obvious to a person of ordinary skill in the art to use results from a Monte Carlo simulation as ground truth, as taught by Xu, when training a machine learning model, as taught by Jia. The motivation for doing so comes from Xu, which discloses, “Monte Carlo simulation, which can be highly accurate if the simulated samples are sufficiently large…” (¶ [0002]). Therefore, it would have been obvious to combine Xu with Jia to obtain the invention specified in this claim.
With regards to claim 17, the steps performed by the method of this claim are obvious over the combination of Jia and Xu for the same reasons as were provided in the discussion of claim 12, which recites a method performing these same steps.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID F DUNPHY whose telephone number is (571)270-1230. The examiner can normally be reached 9 am - 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID F DUNPHY/Primary Examiner, Art Unit 2673