DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 to 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
In accordance with MPEP 2106.04, each of Claims 1 to 20 has been analyzed to determine whether it is directed to any judicial exceptions.
Step 2A, Prong 1 per MPEP 2106.04(a)
Each of Claims 1 to 20 recites at least one step or instruction for processing collected data for display, which is grouped as a mental process in MPEP 2106.04(a)(2)(III) or a mathematical concept in MPEP 2106.04(a)(2)(I).
Specifically, Claim 1 recites a method for providing alignment guidance during ocular surgery comprising:
(a) receiving, by a computing device, from an imaging device, an input image of a patient's eye having a toric intraocular lens (IOL) within the patient's eye (additional element);
(b) obtaining, by the computing device, a reference axis for the patient's eye, the reference axis indicating a desired orientation of a toric IOL axis of the toric IOL (observation, which is grouped as a mental process in MPEP 2106.04(a)(2)(III));
(c) processing, by the computing device, the input image to obtain a feature label indicating locations of features of the toric IOL represented in the input image, the features including any of: alignment dots defined on the toric IOL, a perimeter of the toric IOL, and portions of haptics of the toric IOL (evaluation or judgement, which is grouped as a mental process in MPEP 2106.04(a)(2)(III));
(d) processing, by the computing device, the feature label to determine an orientation of the toric IOL axis (evaluation or judgement, which is grouped as a mental process in MPEP 2106.04(a)(2)(III));
(e) calculating, by the computing device, an angle difference between the toric IOL axis and the reference axis (involves a mathematical concept in MPEP 2106.04(a)(2)(I) and/or an evaluation or judgement, which is grouped as a mental process in MPEP 2106.04(a)(2)(III));
(f) generating, by the computing device, an output image including at least one indicator corresponding to the angle difference (additional element); and
(g) outputting the output image to a display device (additional element).
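For illustration only (and not as part of the claimed or disclosed implementation), the axis-orientation determination and angle-difference calculation recited in steps (d) and (e) can be sketched as follows; the function names, the two-point axis representation, and the degree-based conventions are assumptions of this sketch.

```python
import math

def axis_angle(p1, p2):
    # Orientation, in degrees [0, 180), of the line through two alignment-dot
    # locations (hypothetical (x, y) coordinates taken from a feature label).
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.degrees(math.atan2(dy, dx)) % 180.0

def angle_difference(iol_axis_deg, reference_axis_deg):
    # Signed rotation (degrees) needed to bring the IOL axis onto the
    # reference axis, wrapped into (-90, 90] because an axis is undirected.
    diff = (reference_axis_deg - iol_axis_deg) % 180.0
    return diff - 180.0 if diff > 90.0 else diff
```

For example, an IOL axis at 170 degrees and a reference axis at 10 degrees yields a difference of +20 degrees, reflecting that an axis and its 180-degree rotation are the same orientation.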
Additionally, Claim 14 recites a system for providing alignment guidance during ocular surgery, the system comprising:
an imaging device (additional element);
a display device (additional element);
a computing device comprising one or more processing devices and one or more memory devices storing executable code that, when executed by the one or more processing devices, further cause the one or more processing devices to (additional element):
(a) receive from the imaging device, an input image of a patient's eye having a toric intraocular lens (IOL) within the patient's eye (additional element);
(b) obtain a reference axis for the patient's eye, the reference axis indicating a desired orientation of a toric IOL axis of the toric IOL (observation, which is grouped as a mental process in MPEP 2106.04(a)(2)(III));
(c) process the input image using a machine learning model (additional element) to obtain a feature label indicating locations of features of the toric IOL represented in the input image, the features including any of: alignment dots defined on the toric IOL, a perimeter of the toric IOL, and portions of haptics of the toric IOL (evaluation or judgement, which is grouped as a mental process in MPEP 2106.04(a)(2)(III));
(d) process the feature label to determine an orientation of the toric IOL axis (evaluation or judgement, which is grouped as a mental process in MPEP 2106.04(a)(2)(III));
(e) calculate an angle difference between the toric IOL axis and the reference axis (involves a mathematical concept in MPEP 2106.04(a)(2)(I) and/or an evaluation or judgement, which is grouped as a mental process in MPEP 2106.04(a)(2)(III));
(f) generate an output image including at least one indicator corresponding to the angle difference (additional element); and
(g) output the output image to the display device (additional element).
Further, dependent Claims 2 to 13 and 15 to 20 merely include limitations that either further define the abstract idea (and thus do not make the abstract idea any less abstract), represent insignificant extra-solution activity, or amount to no more than generally linking the use of the abstract idea to a particular technological environment or field of use, because they are merely incidental or token additions to the claims that do not alter or affect how the claimed functions/steps are performed. For example, the machine learning model of dependent Claims 6, 7, 18, 19 and 20 does not add a meaningful limitation to the abstract idea because it amounts to simply implementing the abstract idea on a computer in accordance with MPEP 2106.05(f).
Accordingly, as indicated above, each of the above-identified claims recites an abstract idea as in MPEP 2106.04(a).
Step 2A, Prong 2 per MPEP 2106.04(d)
The above-identified abstract idea in each of independent Claims 1 and 14 (and their respective dependent Claims 2 to 13 and 15 to 20) is not integrated into a practical application under MPEP 2106.04(d) because the additional elements (identified above in independent Claims 1 and 14), either alone or in combination, generally link the use of the above-identified abstract idea to a particular technological environment or field of use according to MPEP 2106.05(h) or represent insignificant extra-solution activity according to MPEP 2106.05(g). More specifically, the additional elements of independent Claim 1 (and its respective dependent claims) include: a computing device; receiving, by the computing device, from an imaging device, an input image of a patient's eye having a toric intraocular lens (IOL) within the patient's eye; generating an output image including at least one indicator corresponding to the angle difference; and outputting the output image to a display device. The additional elements of independent Claim 14 (and its respective dependent claims) include: an imaging device, a display device, and a computing device comprising one or more processing devices and one or more memory devices storing executable code (e.g., a machine learning model) that, when executed by the one or more processing devices, causes the one or more processing devices to receive, from the imaging device, an input image of a patient's eye having a toric intraocular lens (IOL) within the patient's eye, generate an output image including at least one indicator corresponding to the angle difference, and output the output image to the display device.
The imaging device, display device and computing device (e.g., processor, memory and/or machine learning model) are generically recited computer elements in independent Claims 1 and 14 (and their respective dependent claims) which do not improve the functioning of a computer, or any other technology or technical field according to MPEP 2106.04(d)(1) and 2106.05(a). Nor do these above-identified additional elements serve to apply the above-identified abstract idea with, or by use of, a particular machine according to MPEP 2106.05(b), effect a transformation according to MPEP 2106.05(c), provide a particular treatment or prophylaxis according to MPEP 2106.04(d)(2) or apply or use the above-identified abstract idea in some other meaningful way beyond generally linking the use thereof to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception according to MPEP 2106.04(d)(2) and 2106.05(e). Furthermore, the above-identified additional elements do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer in accordance with MPEP 2106.05(f). For at least these reasons, the abstract idea identified above in independent Claims 1 and 14 (and their respective dependent claims) is not integrated into a practical application in accordance with MPEP 2106.04(d).
Moreover, the above-identified abstract idea is not integrated into a practical application in accordance with MPEP 2106.04(d) because the claimed method and system merely implement the above-identified abstract idea (e.g., mental processes and mathematical concepts) using rules (e.g., computer instructions) executed by a computer (e.g., the computing device as claimed). In other words, these claims are merely directed to an abstract idea with additional generic computer elements which do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer according to MPEP 2106.05(f). Additionally, Applicant’s specification does not include any discussion of how the claimed invention provides a technical improvement realized by these claims over the prior art or any explanation of a technical problem having an unconventional technical solution that is expressed in these claims according to MPEP 2106.05(a). That is, as in Affinity Labs of Tex. v. DirecTV, LLC, the specification fails to provide sufficient details regarding the manner in which the claimed invention accomplishes any technical improvement or solution. Thus, for these additional reasons, the abstract idea identified above in independent Claims 1 and 14 (and their respective dependent claims) is not integrated into a practical application under MPEP 2106.04(d).
Accordingly, independent Claims 1 and 14 (and their respective dependent claims) are each directed to an abstract idea according to MPEP 2106.04(d).
Step 2B per MPEP 2106.05
None of Claims 1 to 20 include additional elements that are sufficient to amount to significantly more than the abstract idea in accordance with MPEP 2106.05 for at least the following reasons.
Independent Claim 1 (and its respective dependent claims) requires the additional elements of: a computing device; receiving, by the computing device, from an imaging device, an input image of a patient's eye having a toric intraocular lens (IOL) within the patient's eye; generating an output image including at least one indicator corresponding to the angle difference; and outputting the output image to a display device. Further, independent Claim 14 (and its respective dependent claims) requires the additional elements of an imaging device, a display device, and a computing device comprising one or more processing devices and one or more memory devices storing executable code that, when executed by the one or more processing devices, causes the one or more processing devices to receive, from the imaging device, an input image of a patient's eye having a toric intraocular lens (IOL) within the patient's eye, generate an output image including at least one indicator corresponding to the angle difference, and output the output image to the display device.
The additional elements of an imaging device, a display device and a computing device (e.g., processor, memory and/or machine learning model) are generically claimed computer components which enable the above-identified abstract idea(s) to be conducted by performing the basic functions of automating mental tasks. The courts have recognized such computer functions as well understood, routine, and conventional functions when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. See, MPEP 2106.05(d)(II) along with Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); and OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.
Per Applicant’s specification, the imaging device 302 may be embodied as a digital three-dimensional microscope, such as the ALCON NGENUITY 1.5 (a digital three-dimensional digital marker microscope (DMM) with integrated image guidance) or ZEISS ARTEVO, or as an analog microscope with image guidance, such as the ALCON VERION DIGITAL MARKER or ZEISS CALISTO (paragraph 24). As known in the art, an imaging device 302 may be programmed with a treatment plan including pre-operative images of the patient's eye 100, which may include images of the sclera 104, iris 106 and possibly a portion of the retina 108 (paragraph 24). Additionally, the components of the system 300a in addition to the imaging device may be implemented using the computing capabilities of the imaging device 302 embodied as a digital microscope (paragraph 25). Alternatively, the additional components of the system 300a may be implemented by a separate computing device that receives images labeled with the reference axis from the imaging device 302 (paragraph 25). Further, the post processor 306 may determine the IOL axis from the label and determine an angular difference between the IOL axis and the reference axis (paragraph 30). The post processor 306 may generate an output image based on the image received from the imaging device 302 that has information superimposed thereon, such as lines representing the reference axis and the IOL axis and arrows, text, or other information indicating an amount of rotation needed to align the IOL axis with the reference axis (paragraph 30). Finally, the display device 308 may be implemented as a monitor in a room in which the implantation of the toric IOL 200 is being performed, a heads-up display worn by a surgeon performing the implantation, or another display device (paragraph 31).
Accordingly, in light of Applicant’s specification, the claimed terms imaging device, display device and computing device (e.g., processor, memory and/or machine learning model) are reasonably construed as generic computing devices. As in SAP Am., Inc. v. InvestPic, LLC (Fed. Cir. 2018), it is clear from the claims themselves and the specification that these limitations require no improved computer resources, just already available technology, with its already available basic functions, used as tools in executing the claimed process. See MPEP 2106.05(f).
Furthermore, Applicant’s specification does not describe any special programming or algorithms required for the imaging device, display device or computing device (e.g., processor, memory and/or machine learning model). This lack of disclosure is acceptable under 35 U.S.C. § 112(a) since this hardware performs non-specialized functions known by those of ordinary skill in the computer arts. By omitting any specialized programming or algorithms, Applicant's specification essentially admits that this hardware is conventional and performs well understood, routine and conventional activities in the computer industry or arts. For example, paragraph 24 of Applicant’s specification discloses that the imaging device may be a commercially available digital microscope. In other words, Applicant’s specification demonstrates the well-understood, routine, conventional nature of the above-identified additional elements because it describes these additional elements in a manner that indicates that the additional elements are sufficiently well-known that the specification does not need to describe the particulars of such additional elements to satisfy 35 U.S.C. § 112(a) (see MPEP 2106.05(d)(I)(2) and 2106.07(a)(III)). Adding hardware that performs “‘well understood, routine, conventional activit[ies]’ previously known to the industry” will not make claims patent-eligible (see TLI Communications, along with MPEP 2106.05(d)(I)).
The recitation of the above-identified additional limitations in Claims 1 to 20 amounts to mere instructions to implement the abstract idea on a computer. Simply using a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not provide significantly more. See MPEP 2106.05(f) along with Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); and TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Moreover, implementing an abstract idea on a generic computer does not add significantly more, similar to how the recitation of the computer in the claim in Alice amounted to mere instructions to apply the abstract idea of intermediated settlement on a generic computer.
A claim that purports to improve computer capabilities or to improve an existing technology may provide significantly more. See MPEP 2106.05(a) along with McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314-15, 120 USPQ2d 1091, 1101-02 (Fed. Cir. 2016); and Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36, 118 USPQ2d 1684, 1688-89 (Fed. Cir. 2016). However, a technical explanation as to how to implement the invention should be present in the specification for any assertion that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes. That is, per MPEP 2106.05(a), the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. Here, Applicant’s specification does not include any discussion of how the claimed invention provides a technical improvement realized by these claims over the prior art or any explanation of a technical problem having an unconventional technical solution that is expressed in these claims. Instead, as in Affinity Labs of Tex. v. DirecTV, LLC, 838 F.3d 1253, 1263-64, 120 USPQ2d 1201, 1207-08 (Fed. Cir. 2016), the specification fails to provide sufficient details regarding the manner in which the claimed invention accomplishes any technical improvement or solution.
For at least the above reasons, the method and systems of Claims 1 to 20 are directed to applying an abstract idea as identified above on a general purpose computer without (i) improving the performance of the computer itself or providing a technical solution to a problem in a technical field according to MPEP 2106.05(a), or (ii) providing meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that these claims amount to significantly more than the abstract idea itself according to MPEP 2106.04(d)(2) and 2106.05(e).
Taking the additional elements individually and in combination, the additional elements do not provide significantly more. Specifically, when viewed individually, the above-identified additional elements in independent Claims 1 and 14 (and their dependent claims) do not add significantly more because they are simply an attempt to limit the abstract idea to a particular technological environment according to MPEP 2106.05(h). When viewed as a combination, these above-identified additional elements simply instruct the practitioner to implement the claimed functions with well-understood, routine and conventional activity specified at a high level of generality in a particular technological environment according to MPEP 2106.05(h). When viewed as a whole, the above-identified additional elements do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself according to MPEP 2106.04(d)(2) and 2106.05(e). Moreover, neither the general computer elements nor any other additional element adds meaningful limitations to the abstract idea because these additional elements represent insignificant extra-solution activity according to MPEP 2106.05(g). As such, there is no inventive concept sufficient to transform the claimed subject matter into a patent-eligible application as required by MPEP 2106.05.
Therefore, for at least the above reasons, none of the Claims 1 to 20 amounts to significantly more than the abstract idea itself. Accordingly, Claims 1 to 20 are not patent eligible and rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-6, 9, and 13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shimizu et al., US 20110292340 A1, herein referred to as Shizumi.
Re. Claim 1, Shizumi discloses a method for providing alignment guidance during ocular surgery comprising (Fig 1, ophthalmic apparatus 100, abstract):
(a) receiving, by a computing device (Fig 1, control unit 10 [0027]), from an imaging device (Fig 1, camera 20), an input image of a patient's eye having a toric intraocular lens (IOL) ([0023-0024]) within the patient's eye ([0027]);
(b) obtaining, by the computing device, a reference axis for the patient's eye ([0029-0030]), the reference axis indicating a desired orientation of a toric IOL axis of the toric IOL ([0038]);
(c) processing, by the computing device, the input image ([0027]) to obtain a feature label (Fig 3A-C, display box 62) indicating locations of features of the toric IOL represented in the input image (Fig 3A-C, image 51), the features including any of: alignment dots defined on the toric IOL (Fig 3A-C, not labeled; [0050], R1, the length of the straight line shown in a dotted line), a perimeter of the toric IOL (Fig 3A-C [0050]), and portions of haptics of the toric IOL ([0019], anticipating a broader structure that can be used as a haptic);
(d) processing, by the computing device, the feature label to determine an orientation of the toric IOL axis ([0035]);
(e) calculating, by the computing device, an angle difference between the toric IOL axis ([0041]) and the reference axis ([0041]);
(f) generating, by the computing device, an output image (Fig 4B, image 70, [0016]) including at least one indicator (Fig 4B, angle 62a) corresponding to the angle difference (Fig 4B, angle 62a); and
(g) outputting the output image to a display device (Fig 4B, monitor 40).
Re. Claim 2, Shizumi discloses further comprising: (h) adjusting, by a surgeon, an orientation of the toric IOL; and (i) repeating (a) through (g) ([0036]).
Re. Claim 3, Shizumi discloses further comprising, following performing (h) and (i): determining, by the computing device (Fig 1, control unit 10), that the angle difference meets a predefined tolerance ([0032]); and in response to determining that the angle difference (Fig 4B, angle 62a) meets the predefined tolerance ([0035]), outputting, by the computing device, on the display device (Fig 4B, monitor 40), an indicator indicating that no further rotation of the toric IOL is required ([0062]; Fig 5, confirming theta and d (distance) on display box 82).
Re. Claim 4, Shizumi discloses wherein determining that the angle difference (Fig 4B, angle 62a) meets the predefined tolerance ([0032]) comprises determining that a refractive error (Abstract; configured to correct astigmatism, a type of refractive error) resulting from the angle difference meets the predefined tolerance ([0032]).
Re. Claim 5, Shizumi discloses wherein (c) further comprises processing the input image ([0027]) to obtain one or more bounding boxes ([0040], dioptic power, axial angle) including the features and using the one or more bounding boxes to obtain the feature label (Fig 3A-C, display box 62).
Re. Claim 6, Shizumi discloses wherein processing the feature label (Fig 3A-C, display box 62) to determine the orientation of the toric IOL axis comprises generating line parameters describing a line passing through the alignment dots ([0040]; Fig 3A-C, not labeled; [0050], R1, the length of the straight line shown in a dotted line).
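For illustration only, the generation of line parameters describing a line passing through detected alignment dots, as recited in Claim 6, can be sketched as a principal-direction fit; the input format, function name, and covariance-based method are assumptions of this sketch, not a disclosed implementation.

```python
import math

def fit_axis_through_dots(dots):
    # Fit a line through alignment-dot centroids (hypothetical input: a list
    # of (x, y) pixel coordinates) and return the line parameters as
    # (orientation in degrees [0, 180), centroid point on the line).
    n = len(dots)
    mx = sum(x for x, _ in dots) / n
    my = sum(y for _, y in dots) / n
    # Principal direction from the 2x2 covariance of the coordinates, which
    # remains well-defined even for a vertical axis (unlike y = m*x + b).
    sxx = sum((x - mx) ** 2 for x, _ in dots)
    syy = sum((y - my) ** 2 for _, y in dots)
    sxy = sum((x - mx) * (y - my) for x, y in dots)
    angle = 0.5 * math.degrees(math.atan2(2.0 * sxy, sxx - syy)) % 180.0
    return angle, (mx, my)
```

The covariance formulation is chosen over a slope-intercept fit because toric IOL axes near 90 degrees would otherwise produce unbounded slopes.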
Re. Claim 9, Shizumi discloses wherein the at least one indicator is one or more digits representing the angle difference (Shizumi Fig 4B, angle 62a).
Re. claim 13, Shizumi discloses further comprising: matching, by the computing device (Shizumi Fig 1, control unit 10 [0027]), ocular anatomy represented in the input image to a treatment plan to determine an orientation of the patient's eye (Shizumi Fig 4B, 53); and determining, by the computing device (Shizumi Fig 1, control unit 10 [0027]), an orientation of the reference axis (Shizumi [0029-0030]) according to the treatment plan and the orientation of the patient's eye (Shizumi [0038] and [0041]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 7, 8, 10, 12, 14, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shizumi in view of Padrick et al., US 20190209242 A1, herein referred to as Padrick.
Re. Claim 7, Shizumi discloses processing the feature label (Fig 3A-C, display box 62) to determine the orientation of the toric IOL axis ([0038]).
Shizumi does not explicitly disclose that this processing further comprises processing the feature label using a machine learning model. However, Padrick discloses a similar IOL alignment method and teaches processing the feature label ([0055]) using a machine learning model ([0021]).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Shizumi to incorporate processing the feature label using a machine learning model, as taught and suggested by Padrick, in order to achieve a desired manifest refraction in spherical equivalent IOL implantation (Padrick [0003]).
Re. Claim 8, the combination of Shizumi and Padrick discloses the invention substantially as claimed and as discussed above with respect to Claim 7, and further teaches wherein the machine learning model (Padrick [0021]) is a logistic regression model (Padrick [0045]).
Re. Claim 10, the combination of Shizumi and Padrick discloses the invention substantially as claimed and as discussed above with respect to Claim 8, and further teaches wherein the at least one indicator (Shizumi Fig 4B, angle 62a) is one or more digits (Shizumi does not explicitly teach digits for the angle output; Padrick [0088] cites an equation whose output would be in terms of digits) representing a refractive error corresponding to the angle difference (Padrick [0089]).
Re. Claim 12, the combination of Shizumi and Padrick discloses the invention substantially as claimed and as discussed above with respect to Claim 7, and further teaches wherein the imaging device is a digital microscope (Padrick [0024]).
Re. Claim 14, the combination of Shizumi and Padrick discloses the invention substantially as claimed and as discussed above with respect to Claim 1, and further teaches a system for providing alignment guidance during ocular surgery, the system comprising (Shizumi Fig 1, ophthalmic apparatus 100, abstract): an imaging device (Shizumi Fig 1, camera 20); a display device (Shizumi Fig 4B, monitor 40); a computing device (Shizumi Fig 1, control unit 10 [0027]) comprising one or more processing devices (Shizumi Fig 1, control unit 10 [0027]) and one or more memory devices (Shizumi Fig 1, memory 11) storing executable code (Shizumi [0028]) that, when executed by the one or more processing devices (Shizumi [0028-0029]), causes the one or more processing devices to: (a) receive, from the imaging device (Shizumi [0028-0029]), an input image (Shizumi Fig 3A-C, image 51) of a patient's eye having a toric intraocular lens (IOL) within the patient's eye (Shizumi Fig 3A-C [0050]); (b) obtain a reference axis for the patient's eye (Shizumi [0029-0030]), the reference axis indicating a desired orientation of a toric IOL axis of the toric IOL (Shizumi [0041]); (c) process the input image using a machine learning model (Padrick [0055] and [0021]) to obtain a feature label indicating locations of features of the toric IOL represented in the input image, the features including any of: alignment dots defined on the toric IOL (Shizumi [0040]; Fig 3A-C, not labeled; [0050], R1, the length of the straight line shown in a dotted line), a perimeter of the toric IOL, and portions of haptics of the toric IOL (Shizumi [0019], anticipating a broader structure that can be used as a haptic); (d) process the feature label (Shizumi Fig 3A-C, display box 62) to determine an orientation of the toric IOL axis (Shizumi [0040]); (e) calculate an angle difference between the toric IOL axis and the reference axis (Shizumi [0041]); (f) generate an output image (Shizumi Fig 4B, image 70, [0016]) including at least one indicator corresponding to the angle difference (Shizumi Fig 4B, angle 62a); and (g) output the output image to the display device (Shizumi Fig 4B, monitor 40).
Re. Claim 18, the combination of Shizumi and Padrick discloses the invention substantially as claimed and as discussed above with respect to Claim 14, and further teaches wherein the machine learning model (Padrick [0055] and [0021]) is further configured to identify one or more bounding boxes (Padrick [0041], teaching the incorporation of hyperparameters in calculations) highlighting the features and use the one or more bounding boxes to obtain the feature label (Padrick [0046], the “F model” provides a capable feature label output).
Re. Claim 19, the combination of Shizumi and Padrick discloses the invention substantially as claimed and as discussed above with respect to Claim 14, and further teaches wherein: the machine learning model (Padrick [0055] and [0021]) is a first machine learning model; and the executable code (Shizumi [0028]), when executed by the one or more processing devices (Shizumi Fig 1, control unit 10 [0027]), further causes the one or more processing devices to process the feature label (Shizumi Fig 3A-C, display box 62) using a second machine learning model (Padrick [0004], one or more machine learning models) to determine an orientation of the toric IOL axis (Shizumi [0040]).
Re. Claim 20, the combination of Shizumi and Padrick discloses the invention substantially as claimed and as discussed above with respect to Claim 19, and further teaches wherein the second machine learning model is a logistic regression model (Padrick [0045]).
Claims 11 and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Shizumi in view of Padrick, and further in view of Boukhny et al., US 20090048608 A1, herein referred to as Boukhny, as evidenced by Joshi, Neel, et al., “Real-Time Hyperlapse Creation via Optimal Frame Selection,” ACM Transactions on Graphics 34.4 (2015): 1-9, herein referred to as Joshi.
Re. claim 11, the combination of Shizumi and Padrick discloses the invention substantially as claimed and as discussed above with respect to claim 7, and further teaches: receiving, by the computing device, from the imaging device, a video feed comprising a plurality of frames; performing (a) through (c) […] by the computing device (Shizumi Fig 1, control unit 10 [0027]); and tracking, using a tracking algorithm (Padrick [0021] and [0023], teaching use of multiple images of the patient's eye over time), the features for the plurality of frames to obtain a predicted label for each frame of the plurality of frames (Padrick [0023]), the predicted label (Shizumi Fig 3A-C, display box 62) for one or more frames of the plurality of frames including representations of one or more of the features that are not represented in the feature label obtained for the one or more frames of the plurality of frames (Shizumi Fig 3A-C, display box 62). But the combination does not explicitly disclose a video feed comprising a plurality of frames.
Boukhny, however, discloses a similar toric IOL alignment method. Boukhny teaches a video feed ([0088]) comprising a plurality of frames ([0030], real-time video includes a plurality of frames; 24-30 frames per second is inherent in the art, as evidenced by Joshi (abstract)) and tracking ([0029]).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the combination of Shizumi and Padrick to incorporate a video feed comprising a plurality of frames, as taught and suggested by Boukhny, in order to prevent motion artifacts (Boukhny [0026]).
Re. claim 15, the combination of Shizumi, Padrick, and Boukhny discloses the invention substantially as claimed and as discussed above with respect to claim 11, and further teaches wherein the executable code (Shizumi [0028]), when executed by the one or more processing devices (Shizumi Fig 1, control unit 10 [0027]), further causes the one or more processing devices to: receive a video feed (Boukhny [0088]) from the imaging device (Shizumi [0028-0029]), the video feed comprising a plurality of frames (Boukhny [0030], real-time video includes a plurality of frames; 24-30 frames per second is inherent in the art, as evidenced by Joshi (abstract)); and repeat (a) through (g) periodically using each frame of at least a portion of the plurality of frames as the input image (Boukhny [0029]).
Re. claim 16, the combination of Shizumi, Padrick, and Boukhny discloses the invention substantially as claimed and as discussed above with respect to claim 15, and further teaches wherein the executable code (Shizumi [0028]), when executed by the one or more processing devices (Shizumi Fig 1, control unit 10 [0027]), further causes the one or more processing devices to: track, using a tracking algorithm (Padrick [0021] and [0023], teaching use of multiple images of the patient's eye over time), the features for the plurality of frames to obtain a predicted label for each frame of the plurality of frames (Padrick [0020], teaching iterative calculations to optimize the final label from multiple images), the predicted label for one or more frames of the plurality of frames including representations of one or more of the features that are not represented in the feature label obtained for the one or more frames of the plurality of frames (Padrick [0021] and [0067], teaching combining the model and iterations as needed for the final prediction).
Re. claim 17, the combination of Shizumi, Padrick, and Boukhny discloses the invention substantially as claimed and as discussed above with respect to claim 15, and further teaches wherein the executable code (Shizumi [0028]), when executed by the one or more processing devices (Shizumi Fig 1, control unit 10 [0027]), further causes the one or more processing devices to: determine that the angle difference (Shizumi Fig 4B, angle 62a) meets a predefined tolerance (Shizumi [0032]); and in response to determining that the angle difference meets the predefined tolerance, output, on the display device, an indicator indicating that no further rotation of the toric IOL is required (Shizumi [0062]; Fig 5, confirming theta and d (distance) on display box 82).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Adrian Flores whose telephone number is (571)272-1450. The examiner can normally be reached M-F, 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Melanie Tyson can be reached at (571) 272-9062. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.F./ Patent Examiner, Art Unit 3774
/THOMAS C BARRETT/ SPE, Art Unit 3799