DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-15 have been examined in this application. This communication is the first action on the merits.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
Claims 1-14 recite “(F)” throughout the claims; however, “(F)” is used to refer to, at least: a reference eyeglasses frame (F), a plurality of model eyeglasses frames (F), model eyeglasses frames (F), and a picture of the selected model eyeglasses frames (F). Thus, it is unclear which feature “(F)” is specifically intended to refer to. Consequently, one of ordinary skill in the art cannot determine how to avoid infringement of these claims because their metes and bounds are unclear.
Claim 15 recites “System (101) comprising the device (102) according to claim 14 and a 3D scanner or a measuring arm.” As recited, it is unclear whether this claim is intended to be an independent system claim or a dependent claim depending on claim 14. Further, it is unclear whether the system is intended to comprise: 1) the device combined with the 3D scanner, or alternatively only a measuring arm, or 2) the device in addition to either the 3D scanner or a measuring arm. Consequently, one of ordinary skill in the art cannot determine how to avoid infringement of this claim because its metes and bounds are unclear. For examination purposes, in light of claim 14, the Examiner has interpreted claim 15 as: System (101) comprising a device for determining a similarity score between a selected eyeglasses frame (F) and a plurality of model eyeglasses frames (F), the device comprising a memory (102-a) and a processor (102-b), the device (102) being arranged to execute a method for determining a similarity score between a selected eyeglasses frame (F) and a plurality of model eyeglasses frames (F), the method comprising - a step of generating (201) a picture using values of a first subset of physical parameters of the reference eyeglasses frame (F), - a step of selecting (202) at least one of the model eyeglasses frames (F), based on values of a second subset of the physical parameters and by comparison of the values of the second subset of the physical parameters of the reference eyeglasses frame (F) with the values of the second subset of the physical parameters of each of the model eyeglasses frames (F), - a step of determining (203) a similarity score for each of the selected model eyeglasses frames (F), using a convolutional neural network, by comparing the picture of the eyeglasses frame (F) with a picture of the selected model eyeglasses frames (F).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Step 1. When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter.
Step 2A – Prong One. If the claims fall within one of the statutory categories, it must then be determined whether the claims recite an abstract idea, law of nature, or natural phenomenon.
Step 2A – Prong Two. If the claims recite an abstract idea, law of nature, or natural phenomenon, it must then be determined whether the claims recite additional elements that integrate the judicial exception into a practical application. If the claims do not recite additional elements that integrate the judicial exception into a practical application, then the claims are directed to a judicial exception.
Step 2B. If the claims are directed to a judicial exception, it must be evaluated whether the claims recite additional elements that amount to an inventive concept (i.e. “significantly more”) than the recited judicial exception.
In the instant case, claims 1-13 are directed to a process; claims 14 and 15 are directed to a machine.
A claim “recites” an abstract idea if there are identifiable limitations that fall within at least one of the groupings of abstract ideas enumerated in MPEP 2106. In the instant case, claim 1, and similarly claims 14 and 15, recites:
a step of generating (201) a picture using values of a first subset of physical parameters of the reference eyeglasses frame (F), a step of selecting (202) at least one of the model eyeglasses frames (F), based on values of a second subset of the physical parameters and by comparison of the values of the second subset of the physical parameters of the reference eyeglasses frame (F) with the values of the second subset of the physical parameters of each of the model eyeglasses frames (F), a step of determining (203) a similarity score for each of the selected model eyeglasses frames (F) by comparing the picture of the eyeglasses frame (F) with a picture of the selected model eyeglasses frames (F) -- these steps set forth mental processes, particularly concepts performed in the human mind or by a human using a pen and paper, including, inter alia, the observation and evaluation of information.
Further, the limitations of the claims are not indicative of integration into a practical application. Taking the independent claim elements separately, the additional elements of performing the steps using a convolutional neural network and a device comprising a memory and a processor merely implement the abstract idea in a generic computer environment. Additionally, taking the dependent claim elements separately, the additional elements of performing the steps using a 3D scanner or a measuring arm and with a Siamese neural network comprising two identical neural networks, each neural network comprising a first convolutional layer (C1) connected to a first max-pooling layer (M2) connected to a second convolutional layer (C3) connected to a second max-pooling layer (M4) connected to a third convolutional layer (C5) connected to a third max-pooling layer (M6) connected to a fourth convolutional layer (C7) connected to a flatten layer (F8) connected to a dense layer (D9) also merely implement the abstract idea in a generic computer environment. Considered in combination, the steps of Applicant’s method add nothing that is not already present when the steps are considered separately.
Thus, claims 1-15 are directed to an abstract idea.
Regarding the independent claims, the additional elements of performing the steps using a device comprising a memory and a processor merely implement the abstract idea in a generic computer environment. Additionally, regarding the dependent claims, the additional elements of performing the steps using a 3D scanner or a measuring arm also merely implement the abstract idea in a generic computer environment. While the claims recite the convolutional neural network being a Siamese neural network, the Siamese neural network comprising two identical neural networks, each neural network comprising a first convolutional layer (C1) connected to a first max-pooling layer (M2) connected to a second convolutional layer (C3) connected to a second max-pooling layer (M4) connected to a third convolutional layer (C5) connected to a third max-pooling layer (M6) connected to a fourth convolutional layer (C7) connected to a flatten layer (F8) connected to a dense layer (D9), these limitations are recited at a high level of generality, as this sequence is the core of many classic CNN models, and thus do not amount to significantly more. Considered in combination, the steps of Applicant’s method add nothing that is not already present when the steps are considered separately.
When considering the elements and combinations of elements, the claims as a whole do not amount to significantly more than the abstract idea itself. This is because the claims do not amount to an improvement to another technology or technical field; do not amount to an improvement to the functioning of a computer itself; do not move beyond a general link of the use of an abstract idea to a particular technological environment; merely amount to the application of, or instructions to apply, the abstract idea on a computer; and amount to nothing more than requiring a generic computer to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the industry.
The analysis above applies to all statutory categories of invention. Accordingly, claims 1-15 are rejected as ineligible for patenting under 35 U.S.C. 101 based upon the same rationale.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-10 and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Venturini (US PG Pub 2023/0055013), claiming priority from Provisional Application 63/236,082, filed 08/23/2021, in view of Koch (“Siamese neural networks for one-shot image recognition,” ICML Deep Learning Workshop, Vol. 2, No. 1, 2015).
As per claim 1, Venturini teaches a computer implemented method for determining a similarity score between a reference eyeglasses frame (F) and a plurality of model eyeglasses frames (F), the method comprising
- a step of generating (201) a picture using values of a first subset of physical parameters of the reference eyeglasses frame (F), (Venturini: Figs. 1A-1C; [0028]-[0038](Exemplary first enrollment image 102 comprises a human subject 104 wearing a first pair of eyeglasses 106. Exemplary first enrollment image 102 may be captured, e.g., by an image capture device (e.g., webcam, camera, or the like) of an electronic device at any point in time, such as when a user is enrolling in a new system, app, game, or online community that utilizes 3D avatars, or when the user desires to update their avatar for a respective system, app, game, or online community. Alternately, a previously-captured image of the user may be used, e.g., from a stored photo library. In the example of system 100, the objects of interest are eyeglasses frames, and an object detection algorithm in this example has produced a masked region 108, wherein the light-colored pixels in masked region 108 represent pixels determined by the algorithm to be part of a frame of a first pair of eyeglasses identified in the enrollment image 102. In other embodiments, of course, masked regions may be produced indicating the pixels in the captured image that are determined to be a part of whatever type of object is of interest to the given application (e.g., an object other than a pair of eyeglasses, such as a cup or mug). In this example 100, a front facing view of the pair of eyeglasses is expected, but other views of the object of interest may also be acceptable, as long as the 3D model objects it is compared against are oriented into a similar view, such that a valid similarity score may be determined for the 3D model object. Once the pixels determined to be part of a frame of the first pair of eyeglasses are identified in masked region 108, a first outline 110 may be generated for the first pair of eyeglasses. 
In some embodiments, it may be desirable to create a simple outline of an outer edge of the object of interest, e.g., with the outline comprising an ordered list of key points appearing on a grid. This may comprise: extracting a set of edge pixels from the first masked region 108 (e.g., using any desired edge detection algorithm, e.g., Sobel edge detection); placing the set of edge pixels over a grid comprising a plurality of cells; determining a center of mass of edge pixels located within each cell of the grid (wherein the density of the cells in the grid with respect to the size of the mask region representing the first pair of eyeglasses may be customized, e.g., based on how fine or coarse of an object outline is needed for a given implementation); and then determining an ordered list of key points (112), wherein the key points comprise the centers of mass of the cells of the grid containing edge pixels representing an outer edge of the frame of the first pair of eyeglasses. In some embodiments, determining an outer edge of the object of interest (i.e., as opposed to an inner edge) may be an important consideration, e.g., such that only the silhouette of the object of interest is matched against the silhouettes of objects in the 3D model object library, while inner edges (e.g., the inner edges 114 identified around the lenses of the eyeglass frames in masked region 108) may be ignored.))
- a step of selecting (202) at least one of the model eyeglasses frames (F), based on values of a second subset of the physical parameters and by comparison of the values of the second subset of the physical parameters of the reference eyeglasses frame (F) with the values of the second subset of the physical parameters of each of the model eyeglasses frames (F), (Venturini: Figs. 1A-1C; [0030]-[0031] (As will be discussed below, creating an ordered list of key points for the outlines of the two aligned shapes (i.e., an outline of the detected object of interest in the image and an outline of a candidate matching 3D model of the object of interest from an object library), wherein, e.g., the outlines may be in the form of Catmull-Rom splines, other types of curves, connected line segments, or the like, may allow for an easier computation of the amount of area between the two outlines, with a smaller area between two outlines indicating a more precise matching in outline shape between two objects. For example, in this case, the 2D representation of the object of interest in the enrollment image will be compared to a specified view of a respective 3D model of a variant of the object of interest in a 3D model object library.); [0039]-[0042] (Next, a second outline 126 may be created for the first model view 124, e.g., following an outline generation process similar to that described above with regard to the generation of the first outline 110 for the masked region 108. 
In particular, for each 3D model of pairs of eyeglasses in the first set, a second outline may be generated for the respective 3D model by: extracting a set of edge pixels from the respective 3D model, as oriented in a specified view; placing the set of edge pixels over a grid comprising a plurality of cells; determining a center of mass of edge pixels located within each cell of the grid; and then determining an ordered list of key points, wherein the key points comprise the centers of mass of cells of the grid containing edge pixels representing an outer edge of the frame of the respective 3D model of pair of eyeglasses. Once a first outline has been determined for the first pair of eyeglasses and second outline has been determined for each 3D model of pair of eyeglasses in the first set, at block 128, each respective second outline may be aligned with the first outline so that they may be compared to one another and a similarity score may be determined for the respective 3D model from the first set.); Fig. 2)
- a step of determining (203) a similarity score for each of the selected model eyeglasses frames (F), . . . ., by comparing the picture of the eyeglasses frame (F) with a picture of the selected model eyeglasses frames (F). (Venturini: Figs. 1A-1C; [0041]-[0042] (Once a first outline has been determined for the first pair of eyeglasses and second outline has been determined for each 3D model of pair of eyeglasses in the first set, at block 128, each respective second outline may be aligned with the first outline so that they may be compared to one another and a similarity score may be determined for the respective 3D model from the first set. In some cases, aligning the first outline and a respective second outline may comprise: aligning the key points comprising the first outline and the second outline for the respective 3D model of pairs of eyeglasses in the first set. The alignment operation may involve translating, rotating, and/or scaling the first outline, as necessary, so that a valid comparison to a respective second outline may be made. In still other cases, an Iterative Closest Point (ICP)-style algorithm may be employed to quickly align the two outline shapes. Next, as shown at block 130, the aligned first and second outlines may be compared to one another and a similarity score may be determined at block 132.); Fig. 2)
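For illustration only (outside the record of this action), the grid-based outline key-point extraction described in the Venturini passages cited above can be sketched as follows. The cell size, coordinate convention, and ordering below are assumptions; Venturini orders the key points along the outline rather than by cell index.

```python
def outline_key_points(edge_pixels, cell_size):
    """Bin edge pixels into grid cells and return one key point per
    occupied cell: the centre of mass of that cell's edge pixels."""
    cells = {}
    for x, y in edge_pixels:
        cells.setdefault((x // cell_size, y // cell_size), []).append((x, y))
    key_points = []
    # Deterministic cell order for reproducibility; the reference
    # instead orders key points along the outline itself.
    for cell in sorted(cells):
        pts = cells[cell]
        key_points.append((sum(p[0] for p in pts) / len(pts),
                           sum(p[1] for p in pts) / len(pts)))
    return key_points

# Two edge pixels fall in one 4x4 cell, one pixel in another:
print(outline_key_points([(0, 0), (1, 1), (10, 10)], 4))
# → [(0.5, 0.5), (10.0, 10.0)]
```

A finer grid (smaller `cell_size`) yields a denser outline, matching the reference's note that grid density may be tuned to how fine or coarse an outline is needed.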
Venturini does not explicitly disclose the following known technique which is taught by Koch:
. . . using a convolutional neural network . . . (Koch: Page 2 (“employ large siamese convolutional neural networks which a) are capable of learning generic image features”))
This known technique is applicable to the method of Venturini as they both share characteristics and capabilities, namely, they are directed to image processing.
One of ordinary skill in the art at the time of filing would have recognized that applying the known technique of Koch would have yielded predictable results and resulted in an improved method. It would have been recognized that applying the technique of Koch to the teachings of Venturini would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such CNN features into similar methods. Further, applying the use of a convolutional neural network to the teachings of Venturini would have been recognized by those of ordinary skill in the art as resulting in an improved method achieving strong results which exceed those of other deep learning models, with near state-of-the-art performance on one-shot classification tasks. (Koch: Page 1).
As per claim 2, Venturini/Koch teach the physical parameters comprising:
- at least one point of a border of a right rim (R1) of the eyeglasses frame (F), (Venturini: Figs. 1A-1C; [0028]-[0038])
- at least one point of a border of a left rim (R2) of the eyeglasses frame (F), and (Venturini: Figs. 1A-1C; [0028]-[0038])
- a distance separating a centre of the right rim (R1) and a centre of the left rim (R2). (Venturini: Figs. 1A-1C; [0028]-[0038])
As per claim 3, Venturini/Koch teach the at least one point of the border of the right rim (R1) and the at least one point of the border of the left rim (R2) being obtained using a 3D scanner or a measuring arm. (Venturini: [0023]; Figs. 1A-1C; [0028]-[0038])
As per claim 4, Venturini/Koch teach the step of generating (201) the picture comprising:
- a step of generating a first closed line (301) representing the right rim (R1) using the at least one point of a border of the right rim (R1), (Venturini: Figs. 1A-1C; [0028]-[0038])
- a step of generating a second closed line (302) representing the left rim (R2) using the at least one point of a border of the left rim (R2), (Venturini: Figs. 1A-1C; [0028]-[0038])
a longitudinal axis of the first closed line being identical to a longitudinal axis of the second closed line, (Venturini: Figs. 1A-1C; [0028]-[0038] (disclosing edge pixels over grid as longitudinal axis))
a centre of the first closed line being at a distance of a centre of the second closed line equal to the distance separating the centre of the right rim (R1) and the centre of the left rim (R2),
the step of generating (201) the picture also comprising a step of generating a straight line segment (303) between a first point of the first closed line and a second point of the second closed line, the first point and the second point being the closest points. (Venturini: Figs. 1A-1C; [0028]-[0038])
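For illustration only (outside the record of this action), the claimed straight line segment (303) joining the closest points of the two closed lines can be found by a brute-force nearest-pair search when each closed line is modelled as a list of sample points; the point-list representation and the sample coordinates are assumptions.

```python
from math import dist  # Euclidean distance, Python 3.8+

def closest_segment(line1, line2):
    """Return the endpoints (p1, p2) of the shortest straight segment
    joining a point of the first closed line to one of the second."""
    return min(((p, q) for p in line1 for q in line2),
               key=lambda pq: dist(*pq))

# Illustrative right-rim and left-rim sample points:
print(closest_segment([(0, 0), (0, 1)], [(3, 0), (5, 5)]))
# → ((0, 0), (3, 0))
```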
As per claim 5, Venturini/Koch teach the step of generating the picture (201) also comprising,
- a step of colouring the first closed line based on a distance between at least one point of the first closed line and a front part of the reference eyeglasses frame (F) and/or (Venturini: [0024]; Figs. 1A-1C; [0028]-[0038]; Fig. 2; [0042]-[0043])
- a step of colouring the second closed line based on a distance between at least one point of the second closed line and the front part. (Venturini: [0024]; Figs. 1A-1C; [0028]-[0038]; Fig. 2; [0042]-[0043])
As per claim 6, Venturini/Koch teach the step of colouring the first closed line comprising:
- a step of determining the distance between the at least one point of the first closed line and the front part, (Venturini: Figs. 1A-1C; [0028]-[0038])
- a step of selecting a colour of a colour set, each colour of the colour set being associated with a distance, (Venturini: [0024]; Figs. 1A-1C; [0028]-[0038]; Fig. 2; [0042]-[0043])
- a step of applying the colour in a part of the first closed line in the vicinity of the at least one point of the first closed line and/or the step of colouring the second closed line comprising: (Venturini: [0024]; Figs. 1A-1C; [0028]-[0038]; Fig. 2; [0042]-[0043])
- a step of determining the distance between the at least one point of the second closed line and the front part, (Venturini: Figs. 1A-1C; [0028]-[0038])
- a step of selecting a colour of the colour set, (Venturini: [0024]; Figs. 1A-1C; [0028]-[0038]; Fig. 2; [0042]-[0043])
- a step of applying the colour in a part of the second closed line in the vicinity of the at least one point of the second closed line. (Venturini: [0024]; Figs. 1A-1C; [0028]-[0038]; Fig. 2; [0042]-[0043])
As per claim 7, Venturini/Koch teach the colour set comprising shades of grey. (Venturini: [0024]; Fig. 2)
As per claim 8, Venturini/Koch teach the method also comprising, when a length of the picture is bigger than a first threshold or when a width of the picture is bigger than a second threshold, a step of reducing the size of the picture. (Venturini: Figs. 1A-1C; [0031]-[0038]; [0039]; [0054])
As per claim 9, Venturini/Koch teach the convolutional neural network being a Siamese neural network. (Koch: Page 2)
As per claim 10, Venturini/Koch teach the Siamese neural network comprising two identical neural networks and a cost module, each neural network comprising:
- a first convolutional layer (C1) connected to
- a first max-pooling layer (M2) connected to
- a second convolutional layer (C3) connected to
- a second max-pooling layer (M4) connected to
- a third convolutional layer (C5) connected to
- a third max-pooling layer (M6) connected to
- a fourth convolutional layer (C7) connected to
- a flatten layer (F8) connected to
- a dense layer (D9). (Koch: Pages 2 and 3).
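For illustration only (outside the record of this action), the spatial size of the feature map through one branch of the claimed sequence C1, M2, C3, M4, C5, M6, C7 can be traced as follows. The claims recite no hyperparameters, so the 3x3 kernels, "valid" padding, and 2x2 non-overlapping pooling below are assumptions, as is the 105-pixel input width.

```python
def conv(size, kernel=3):
    """A 'valid' convolution shrinks each spatial dimension by kernel - 1."""
    return size - (kernel - 1)

def maxpool(size, pool=2):
    """Non-overlapping max-pooling divides each spatial dimension by pool."""
    return size // pool

def branch_output(size):
    """Spatial size after C1, M2, C3, M4, C5, M6, C7 in one branch;
    the flatten (F8) and dense (D9) layers follow."""
    for layer in (conv, maxpool, conv, maxpool, conv, maxpool, conv):
        size = layer(size)
    return size

print(branch_output(105))  # → 9
```

Under these assumed hyperparameters, a 105x105 input is reduced to a 9x9 map before flattening, illustrating why such conv/pool stacks are described as a generic feature-extraction pattern.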
As per claim 13, Venturini/Koch teach the method also comprising:
- a step of displaying the selected model eyeglasses frames (F). (Venturini: [0048]; Fig. 2; [0041]-[0042])
As per claims 14 and 15, these claims recite limitations substantially similar to claim 1 and are therefore rejected in the same manner as this claim, as set forth above.
Claims 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Venturini/Koch in view of Fonte (US PG Pub 2015/0154678).
As per claim 11, Venturini/Koch teach the method also comprising:
- a step of selecting (401) among the selected model eyeglasses frames (F), the one having the highest similarity score, (Venturini: [0048]; [0041]-[0042])
- a step of manufacturing (402) a lens based on the selected model eyeglasses frame (F). (Fonte: [0107]-[0108] (As illustrated at 18, the computer system provides instructions to a manufacturing system, which produces the one-up custom product.))
This known technique is applicable to the method of Venturini/Koch as they both share characteristics and capabilities, namely, they are directed to generating eyewear.
One of ordinary skill in the art at the time of filing would have recognized that applying the known technique of Fonte would have yielded predictable results and resulted in an improved method. It would have been recognized that applying the technique of Fonte to the teachings of Venturini/Koch would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such manufacturing features into similar methods. Further, applying the step of manufacturing (402) a lens based on the selected model eyeglasses frame (F) to the teachings of Venturini/Koch would have been recognized by those of ordinary skill in the art as resulting in an improved method that would provide shopping experiences that enable a unique made-to-order product and an easier and more custom experience for creating and purchasing the perfect product for the individual, in this case a pair of glasses. (Fonte: [0003]-[0008]).
As per claim 12, Venturini/Koch/Fonte teach the step of manufacturing (402) the lens comprising:
- a step of acquiring physical data of the selected model eyeglasses frame (F) and (Venturini: Figs. 1A-1C; [0028]-[0038])
- a step of determining manufacturing data for fitting a lens into said selected model eyeglasses frame (F). (Fonte: [0099] (The output of the computer system 14 is therefore provided to preview system 15 in which the computer system creates previews of custom products and the user. Then, as illustrated at 17, the computer system prepares product models and information for manufacturing the selected one-up, fully-custom product.); [0107] (As illustrated at 17, the computer system prepares the custom product approved by the user for manufacturing. Preparation may involve converting the custom product model and user preferences to a set of specifications, instructions, data-structures, computer-numerical-control instructions, 2D or 3D model files that can be interpreted by manufacturing systems, etc. Preparation may also include custom computer-controlled instructions for guiding machinery or people through each step of the manufacturing process.))
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNIFER V LEE whose telephone number is (571)272-4778. The examiner can normally be reached Monday - Friday 9AM - 5PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JEFFREY A. SMITH can be reached at (571)272-6763. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JENNIFER V LEE/Examiner, Art Unit 3688
/Jeffrey A. Smith/Supervisory Patent Examiner, Art Unit 3688