DETAILED ACTION
This Office Action is responsive to the claims filed on August 22, 2025. Claims 1-18 are under examination. Claims 1, 13, and 15-18 are independent claims.
Claims 1-2 and 4-18 are rejected under 35 USC 112(b).
Claims 1-2, 4-8, and 10-18 are rejected under 35 USC 103 over Kubo in view of Mullis.
Claim 9 is rejected under 35 USC 103 over Kubo in view of Mullis and Montminy.
Response To Amendments And Arguments
35 USC 112(b)/(a)/(f): The Applicant's amendments and arguments have overcome the previous rejections under 35 U.S.C. 112(a) and 112(b) and the previous claim interpretations under 35 U.S.C. 112(f).
35 USC 103: The Applicant’s amendments and arguments have been considered and are persuasive. A new Mullis reference is introduced to teach the features of the amended claims.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-2 and 4-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The independent claims recite, “in which parameters of the first machine learning model are adjusted to permit the estimating of the deterioration in the expected lens performance of the target interchangeable lens based on an operating environment of the target interchangeable lens.” The term “permit” is subject to a significant number of different interpretations that a person of ordinary skill in the art could reasonably attribute to it. For example, the function of the model could reasonably be interpreted to enable a determination/detection of the deterioration by changing a flag in the system that allows the estimation based on the output of the model. The output of the model could reasonably be interpreted to be an intermediate value used in such an estimation. The output of the model could also reasonably be interpreted to include the estimation itself. Because it is unclear which interpretation of the term “permit” is being applied, a person of ordinary skill in the art would not be able to ascertain the metes and bounds of the claim.
The dependent claims are rejected at least based on their dependency from the rejected independent claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-8, and 10-18: Kubo in view of Mullis
Claims 1-2, 4-8, and 10-18 are rejected under 35 U.S.C. 103 as being unpatentable over US 2019/0061049 A1 to Kubo (Kubo) in view of US 2018/0040135 A1 to Mullis (Mullis).
Claims 1, 13, and 15
Regarding Claim 1 (and claims 13 and 15), Kubo teaches:
An information processing apparatus comprising: a memory configured to store a program; and a processor communicatively connected to the memory and configured to execute the program to:
(Kubo [0070] “Hereinabove, the functional blocks included in the machine learning device 10 have been described. In order to realize these functional blocks, the machine learning device 10 includes an arithmetic processing device such as a central processing unit (CPU). Moreover, the machine learning device 10 includes an auxiliary storage device such as a hard disk drive (HDD) storing various control programs such as application software and an operating system (OS) and a main storage device such as a random access memory (RAM) for storing data which is temporarily necessary for an arithmetic processing device to execute programs.” [0072] “However, since the machine learning device 10 involves a large amount of arithmetic operations associated with supervised learning, the supervised learning may be processed at a high speed, for example, when a graphics processing unit (GPU) is mounted on a personal computer and the GPU is used for arithmetic processing associated with the supervised learning according to a technique called general-purpose computing on graphics processing units (GPGPU).” – A processing system with multiple cores (CPU and GPU) that can each include multiple units (e.g., processing units, acquisition units, determination units, and any other computing units).)
acquire first information on a usage history of a first interchangeable lens, the first interchangeable lens having at least one optical element, after usage of the first interchangeable lens by a user, the usage history including information on an operating environment of the first interchangeable lens, (Kubo [0025]-[0026] “The machine learning by the machine learning device 10 is performed by supervised learning using training data which uses image data obtained by imaging the focusing lens 21 and data related to the use of the focusing lens 21 as input data and an evaluation value related to the quality judgment of the focusing lens 21 as a label. Here, the data related to the use of the focusing lens 21 includes, for example, data indicating the characteristics of a laser incident on the focusing lens 21 during laser processing performed by the laser machine 20, data indicating the characteristics of a target work radiated with a laser during laser processing, and data indicating the characteristics required for the laser processing.” Also see FIG. 4 (shown below) – The usage history of focusing lenses is collected to train a machine learning model. [0044]-[0045] “Therefore, it is necessary to perform cleaning of the focusing lens 21 periodically as part of maintenance. However, there are cases in which the spatters adhering to the focusing lens 21 can be removed completely by cleaning and cannot be removed completely. In a case in which spatters cannot be removed completely, it is necessary to judge the quality of the focusing lens 21 in order to determine whether the focusing lens 21 after the cleaning is to be used again. However, as described in Related Art, the quality judgment was conventionally performed on the basis of user's rule of thumb and it was difficult to set a quantitative threshold. Moreover, conventionally, the use of the focusing lens 21 after cleaning has not been sufficiently taken into consideration.” See FIGs.
3A and 3B – The training data includes data from lenses that can be cleaned and reused and lenses that are permanently damaged and need to be replaced (e.g., includes a first lens and a target lens).)
acquire second information on an expected lens performance of the first interchangeable lens or a reference interchangeable lens different from the first interchangeable lens and a target interchangeable lens, prior to the usage of the first interchangeable lens or the target interchangeable lens by the user; and (Kubo [0063] “The learning unit 13 receives a pair of the input data and the label as training data and performs supervised learning using the training data to construct a learning model.” [0021] “FIG. 2 is a vertical cross-sectional view schematically illustrating a configuration of a laser machine according to an embodiment of the present invention. FIG. 3A is a schematic plan view when a focusing lens (with no spatter adhering thereto) according to an embodiment of the present invention is seen in the same axial direction as a laser beam.” [0042] “FIG. 3A illustrates a state in which the focusing lens 21 is not used and no spatter adheres to the focusing lens 21. In this state, focusing of the focusing lens 21 can be performed appropriately and laser processing can be executed appropriately.” – The label data includes labels of lenses before use.)
[media_image1.png: FIG. 4 of Kubo (greyscale)]
acquire a first machine learning model; and provide the first information on the operating environment of the first interchangeable lens and the second information on the expected lens performance of the first interchangeable lens or the reference interchangeable lens as input data to the first machine learning model and to generate a second machine learning model for estimating a deterioration in an expected lens performance of the target interchangeable lens after the usage of the target interchangeable lens by the user, in which parameters of the first machine learning model are adjusted to permit the estimating of the deterioration in the expected lens performance of the target interchangeable lens based on an operating environment of the target interchangeable lens. (Kubo [0063] “The learning unit 13 receives a pair of the input data and the label as training data and performs supervised learning using the training data to construct a learning model. For example, the learning unit 13 performs supervised learning using a neural network. In this case, the learning unit 13 performs forward propagation in which the pair of the input data and the label included in the training data is input to a neural network formed by combining perceptrons and the weighting factors for the respective perceptrons included in the neural network are changed so that the output of the neural network is the same as the label.” – The first and second data are used with the labels of the quality of the lenses to perform supervised learning, which includes modifying the weights of a model to yield another model. [0027] “Due to this, the constructed learning model is a learning model capable of judging the quality of the optical component by taking the use of the optical component into consideration.” – The trained ML model determines whether there is deterioration of the lens. 
[0093] “Moreover, one machine learning device 10 may be connected to a plurality of laser machines 20 and a plurality of imaging devices 30. Moreover, one machine learning device 10 may perform learning on the basis of the training data acquired from a plurality of laser machines 20 and a plurality of imaging devices 30. Furthermore, in the above-described embodiments, although one machine learning device 10 is illustrated, a plurality of machine learning devices 10 may be present. That is, the relation between the machine learning device 10 and the laser machine 20 and the imaging device 30 may be one-to-one relation and may be one-to-multiple relation or multiple-to-multiple relation.” [0095] “Moreover, one machine learning device 10 may be connected to a plurality of laser machines 20 and a plurality of imaging devices 30. Moreover, one machine learning device 10 may perform learning on the basis of the training data acquired from a plurality of laser machines 20 and a plurality of imaging devices 30. Furthermore, in the above-described embodiments, although one machine learning device 10 is illustrated, a plurality of machine learning devices 10 may be present. That is, the relation between the machine learning device 10 and the laser machine 20 and the imaging device 30 may be one-to-one relation and may be one-to-multiple relation or multiple-to-multiple relation." As described in Modification 1, when a plurality of machine learning devices 10 is present, a learning model stored in the learning model storage unit 14 of any one of the machine learning devices 10 may be shared between other machine learning devices 10. 
When the learning model is shared between a plurality of machine learning devices 10, since supervised learning can be performed by the respective machine learning devices 10 in a distributed manner, the efficiency of supervised learning can be improved.” – The trained machine learning models may be shared such that data comes from one or more lens devices for training, and the trained model is used by or further modified based on data from, other of the one or more lens devices.)
Kubo suggests including use data related to the use of a focusing lens as input data for determining whether the focusing lens is defective (Kubo [0026]-[0027] “In the following description, the data related to the use of the focusing lens 21 will be referred to as “use data”. In this manner, the machine learning device 10 performs supervised learning which uses the use data related to the use of the focusing lens 21 as well as the image data obtained by imaging the focusing lens 21 as part of the input data to construct a learning model.”), but Kubo does not appear to explicitly teach the following limitations. However, Mullis teaches:
acquire first information on a usage history of a first interchangeable lens attachable to a camera body to capture an image, the first interchangeable lens having at least one optical element, after usage of the first interchangeable lens by a user, the usage history including information on an operating environment of the first interchangeable lens, the information on the operating environment including at least one of a drive or control information, temperature information, humidity information, geographic position information indicating a geographic location of the first interchangeable lens, and information about an external force applied to the first interchangeable lens; (Mullis [0023] “The array camera module 102 includes an array of imaging components 104 formed by a sensor and a lens stack array and the array camera module 102 is configured to communicate with a processor 108.” [0030] “A Calibration process (200) can involve performing a defect detection process (205), a photometric calibration process (210), a geometric calibration process (215), and/or a MTF calibration process (220). One skilled in the art will recognize that a calibration process may include other processes for detecting other types of defects and/or to collect information about other aspects of the array camera and the underlying imaging components if other types of information are needed to perform the image processing algorithms without departing from this invention.” [0030] “Furthermore, calibration process 200 and/or each of the individual calibration/error detection processes included in process 200 may be performed one or more times at each of several predetermined temperatures and the results may be aggregated into calibration information that is indexed by temperature in accordance with a number of embodiments of this invention. 
In accordance with these embodiments, the temperature of the environment and/or the array camera being tested at the time of an iteration of the process 200 is optionally determined (202). The temperature may be measured by a connected device such as, but not limited to, a dark sensor or may be input by an operator depending on the embodiment. The determined temperature is associated with the information generated during the iteration. Furthermore, process 200 may determine whether additional iterations need to be performed at other temperatures (225). If so, process 200 is repeated after the temperature of a test environment is adjusted. Otherwise process 200 ends.” – Mullis teaches detecting defects in a camera array that includes a lens, and it determines the defects in the camera based on measured temperature data.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the environmental use data input into the lens defect detection model of Kubo with the temperature-based defect detection for a camera with a lens taught by Mullis, because a person of ordinary skill in the art, seeking automated detection of defects in an optical lens based on data representing the context in which the focusing lens is used in order to improve optical performance, would look to Mullis, which detects image defects due to environmental temperature in such a way as to improve the optical performance of the camera/lens. (Kubo [0008] “Therefore, an object of the present invention is to provide a machine learning device, a machine learning system, and a machine learning method for judging the quality of optical components by taking the use of optical components into consideration.” [0053] “The use data includes, for example, any one or all of the data indicating the characteristics of a laser incident on the focusing lens 21 during laser processing, the data indicating the characteristics of a target work radiated with a laser during laser processing, and the data indicating the characteristics required for laser processing.”; Mullis [0028] “During a manufacture process and/or periodically during the life of an array camera, a calibration process may be performed to generate or update information that can be utilized to perform the super-resolution processing algorithms. A calibration process to collect the calibration information in accordance with embodiments of this invention is illustrated in FIG. 2.” Abstract “Systems and methods for calibrating an array camera are disclosed.
Systems and methods for calibrating an array camera in accordance with embodiments of this invention include the detecting of defects in the imaging components of the array camera and determining whether the detected defects may be tolerated by image processing algorithms. The calibration process also determines translation information between imaging components in the array camera for use in merging the image data from the various imaging components during image processing. Furthermore, the calibration process may include a process to improve photometric uniformity in the imaging components.”)
Regarding claim 13, claim 13 recites the method that the apparatus of claim 1 is configured to execute and is, therefore, rejected for the same reasons as claim 1.
Regarding claim 15, claim 15 recites a computer-readable medium (CRM) storing a program that causes performance of the same steps as claim 1, using the memory of the apparatus of claim 1, and is, therefore, rejected for the same reasons as claim 1.
Claims 2 and 14
[media_image2.png (greyscale)]
Kubo in view of Mullis teaches the features of claim 1 and further teaches:
The information processing apparatus according to Claim 1, wherein the processor is further configured to execute the program to generate third information on an estimated lens performance of the target interchangeable lens after usage of the target interchangeable lens by the user, based on first information on the operating environment of the target interchangeable lens, second information on the expected lens performance of the target interchangeable lens, and the second machine learning model. (Kubo [0027] “In this manner, the machine learning device 10 performs supervised learning which uses the use data related to the use of the focusing lens 21 as well as the image data obtained by imaging the focusing lens 21 as part of the input data to construct a learning model. Due to this, the constructed learning model is a learning model capable of judging the quality of the optical component by taking the use of the optical component into consideration.” See FIG. 6 (right) – The second learning model, which is trained based on the first and second information (e.g., as mapped to Mullis and Kubo in the rejection of the first claim), outputs an estimate of the lens performance. This is just the use of the machine learning model to make an inference. [0093] “Moreover, one machine learning device 10 may be connected to a plurality of laser machines 20 and a plurality of imaging devices 30. Moreover, one machine learning device 10 may perform learning on the basis of the training data acquired from a plurality of laser machines 20 and a plurality of imaging devices 30. Furthermore, in the above-described embodiments, although one machine learning device 10 is illustrated, a plurality of machine learning devices 10 may be present.
That is, the relation between the machine learning device 10 and the laser machine 20 and the imaging device 30 may be one-to-one relation and may be one-to-multiple relation or multiple-to-multiple relation.” [0095] “Moreover, one machine learning device 10 may be connected to a plurality of laser machines 20 and a plurality of imaging devices 30. Moreover, one machine learning device 10 may perform learning on the basis of the training data acquired from a plurality of laser machines 20 and a plurality of imaging devices 30. Furthermore, in the above-described embodiments, although one machine learning device 10 is illustrated, a plurality of machine learning devices 10 may be present. That is, the relation between the machine learning device 10 and the laser machine 20 and the imaging device 30 may be one-to-one relation and may be one-to-multiple relation or multiple-to-multiple relation." As described in Modification 1, when a plurality of machine learning devices 10 is present, a learning model stored in the learning model storage unit 14 of any one of the machine learning devices 10 may be shared between other machine learning devices 10. When the learning model is shared between a plurality of machine learning devices 10, since supervised learning can be performed by the respective machine learning devices 10 in a distributed manner, the efficiency of supervised learning can be improved.” – The trained machine learning models may be shared such that data comes from one or more lens devices for training, and the trained model is used by or further modified based on data from, other of the one or more lens devices.)
Regarding claim 14, claim 14 is the method executed by the apparatus of claim 2 and is rejected for at least the same reasons as claim 2.
Claim 4
Kubo in view of Mullis teaches the features of claim 1 and further teaches:
The information processing apparatus according to Claim 1, wherein the second information includes at least one of environmental test information, durability test information, load test information, vibration test information, and impact test information of the first interchangeable lens, the target interchangeable lens, or the reference interchangeable lens. (Kubo [0053] “The use data includes, for example, any one or all of the data indicating the characteristics of a laser incident on the focusing lens 21 during laser processing, the data indicating the characteristics of a target work radiated with a laser during laser processing, and the data indicating the characteristics required for laser processing.” – The environment of the use of the lens is environmental test information that the model uses as input.)
Claim 5
Kubo in view of Mullis teaches the features of claim 4 and further teaches:
The information processing apparatus according to Claim 4, wherein the second information includes at least one of optical performance information, operation performance information, dust-proof or drip-proof performance information, evaluation information in an appearance or operation state, and degree of wear information of components of the first interchangeable lens, the target interchangeable lens, or the reference interchangeable lens after a predetermined test has been further performed on the first interchangeable lens, the target interchangeable lens, or the reference interchangeable lens. (Kubo [0061] “The label acquisition unit 12 is a part that acquires the evaluation value from the laser machine 20 as a label and outputs the acquired label to the learning unit 13. Here, the evaluation value in the present embodiment is an evaluation value related to quality judgment and is a value indicating whether the focusing lens 21 can be used as it is (that is, ‘good’) or the focusing lens 21 needs to be replaced (that is, ‘defective’)” [0062] “The label acquisition unit 12 acquires the input evaluation value. Since it is desirable that the evaluation value is accurate, it is desirable that an expert operator makes judgment for determining the evaluation value.”– The label/second information includes optical performance information.)
Claim 6
Kubo in view of Mullis teaches the features of claim 1 and further teaches:
The information processing apparatus according to Claim 1, wherein the second machine learning model uses fourth information on actual lens performance of the first interchangeable lens or the reference interchangeable lens as teacher data. (Kubo [0068] “The learning model storage unit 14 is a storage unit that stores the learning model constructed by the learning unit 13. When new training data is acquired after the learning model was constructed, the supervised learning may be added to the learning model stored in the learning model storage unit 14 and supervised learning may be performed additionally so that the learning model already constructed is updated appropriately.” – The machine learning model is trained with updated data when available. This would have included, as indicated before, input data and a label.)
Claim 7
Kubo in view of Mullis teaches the features of claim 6 and further teaches:
The information processing apparatus according to Claim 6, wherein the fourth information includes at least one of optical performance information, operation performance information, dust-proof or drip-proof performance information, and appearance or operation state information of the first interchangeable lens or the reference interchangeable lens. (Kubo [0027] “In this manner, the machine learning device 10 performs supervised learning which uses the use data related to the use of the focusing lens 21 as well as the image data obtained by imaging the focusing lens 21 as part of the input data to construct a learning model. Due to this, the constructed learning model is a learning model capable of judging the quality of the optical component by taking the use of the optical component into consideration.” – Each time the model is trained, it uses labeled data, the label of which includes “good” or “defective,” which represent operation performance. [0005] “Patent Document 1 discloses an example of a technique related to quality judgment of optical components. In the technique disclosed in Patent Document 1, a colored projection unit that projects a laser beam having passed through a lens is provided so that the shadow of dust adhering to the lens can be projected to the projection unit and can be visually perceived. In this way, the presence of dust adhering to the lens through which a laser beam passes can be easily visually perceived (for example, see [Abstract] and Paragraphs [0024] to [0026] of Specification of Patent Document 1).” – This teaches accounting for a dust-proof metric.)
Claim 8
Kubo in view of Mullis teaches the features of claim 7 and further teaches:
The information processing apparatus according to Claim 7, wherein the first information or the fourth information is acquired when a predetermined inspection is performed after shipment of the first interchangeable lens to the user. (Kubo [0045] “In a case in which spatters cannot be removed completely, it is necessary to judge the quality of the focusing lens 21 in order to determine whether the focusing lens 21 after the cleaning is to be used again.” [0061]-[0062] “The label acquisition unit 12 is a part that acquires the evaluation value from the laser machine 20 as a label and outputs the acquired label to the learning unit 13. Here, the evaluation value in the present embodiment is an evaluation value related to quality judgment and is a value indicating whether the focusing lens 21 can be used as it is (that is, ‘good’) or the focusing lens 21 needs to be replaced (that is, ‘defective’). The evaluation value is determined on the basis of the judgment of a user who observes the focusing lens 21 detached from the laser machine 20. The user inputs the determined evaluation value to the laser machine 20 or the machine learning device 10, for example. The label acquisition unit 12 acquires the input evaluation value. Since it is desirable that the evaluation value is accurate, it is desirable that an expert operator makes judgment for determining the evaluation value.” – The label is applied to lenses after use, in order for the trained machine learning model to identify the defects that come from use.)
Claim 10
Kubo in view of Mullis teaches the features of claim 1 and further teaches:
The information processing apparatus according to Claim 1, wherein the first information further includes sixth information including at least one of optical performance information, operation performance information, dust-proof or drip-proof performance information, and appearance or operation state information of the first interchangeable lens. (Kubo [0025] “The machine learning device 10 is a device that performs machine learning on the focusing lens 21 to construct a learning model for judging the quality of the focusing lens 21. The machine learning by the machine learning device 10 is performed by supervised learning using training data which uses image data obtained by imaging the focusing lens 21 and data related to the use of the focusing lens 21 as input data and an evaluation value related to the quality judgment of the focusing lens 21 as a label.” – The data input into the machine learning model includes optical performance information, operation performance information, and appearance or operation state information of the one or more lens units. [0005] “Patent Document 1 discloses an example of a technique related to quality judgment of optical components. In the technique disclosed in Patent Document 1, a colored projection unit that projects a laser beam having passed through a lens is provided so that the shadow of dust adhering to the lens can be projected to the projection unit and can be visually perceived. In this way, the presence of dust adhering to the lens through which a laser beam passes can be easily visually perceived (for example, see [Abstract] and Paragraphs [0024] to [0026] of Specification of Patent Document 1).” – This teaches accounting for a dust-proof metric.)
Claim 11
Kubo in view of Mullis teaches the features of claim 2 and further teaches:
The information processing apparatus according to Claim 2, wherein the third information includes at least one of optical performance information, operation performance information, dust-proof or drip-proof performance information, and appearance or operation state information of the target interchangeable lens. (Kubo [0027] “In this manner, the machine learning device 10 performs supervised learning which uses the use data related to the use of the focusing lens 21 as well as the image data obtained by imaging the focusing lens 21 as part of the input data to construct a learning model. Due to this, the constructed learning model is a learning model capable of judging the quality of the optical component by taking the use of the optical component into consideration.” – The second learning model, which is trained based on the first and second information, outputs an estimate of the lens performance. This is just the use of the machine learning model to make an inference regarding performance (e.g., ‘good’ or ‘defective’).)
Claim 12
Kubo in view of Mullis teaches the features of claim 2 and further teaches:
The information processing apparatus according to Claim 2, wherein the processor is further configured to execute the program to perform a lens quality determination on the target interchangeable lens based on the third information. (Kubo [0027] “In this manner, the machine learning device 10 performs supervised learning which uses the use data related to the use of the focusing lens 21 as well as the image data obtained by imaging the focusing lens 21 as part of the input data to construct a learning model. Due to this, the constructed learning model is a learning model capable of judging the quality of the optical component by taking the use of the optical component into consideration.” [0064] “In the present embodiment, the output of the neural network is classified into two classes of "good" and "defective", and a probability that the output is classified to a certain class is output. Forward propagation is performed such that the value of a probability of the quality of the focusing lens 21 output by the neural network (for example, a value of the probability of 90% that the quality is "good") is the same as the evaluation value of the label (for example, when the label indicates "good" in the quality, the value of the probability of "good" output by the neural network is 100%).” – The second learning model, which is trained based on the first and second information, outputs an estimate of the lens performance as a probability (third information). Then, a determination of ‘good’ or ‘defective’ (lens quality determination) is made based on this probability. This is just the use of the machine learning model to make an inference regarding performance (e.g., ‘good’ or ‘defective’).)
Claim 16
Regarding claim 16, Kubo teaches:
An information processing system comprising a memory configured to store a program; and a processor communicatively connected to the memory and configured to execute the program to: (Kubo [0070] “Hereinabove, the functional blocks included in the machine learning device 10 have been described. In order to realize these functional blocks, the machine learning device 10 includes an arithmetic processing device such as a central processing unit (CPU). Moreover, the machine learning device 10 includes an auxiliary storage device such as a hard disk drive (HDD) storing various control programs such as application software and an operating system (OS) and a main storage device such as a random access memory (RAM) for storing data which is temporarily necessary for an arithmetic processing device to execute programs.” [0072] “However, since the machine learning device 10 involves a large amount of arithmetic operations associated with supervised learning, the supervised learning may be processed at a high speed, for example, when a graphics processing unit (GPU) is mounted on a personal computer and the GPU is used for arithmetic processing associated with the supervised learning according to a technique called general-purpose computing on graphics processing units (GPGPU).” – A processing system with multiple cores that can each include multiple units (e.g., processing units, acquisition units, determination units, and any other computing units).)
acquire first information on a usage history of a first interchangeable lens, the first interchangeable lens having at least one optical element, after usage of the first interchangeable lens by a user, the usage history including information on an operating environment of the first interchangeable lens, (Kubo [0025]-[0026] “The machine learning by the machine learning device 10 is performed by supervised learning using training data which uses image data obtained by imaging the focusing lens 21 and data related to the use of the focusing lens 21 as input data and an evaluation value related to the quality judgment of the focusing lens 21 as a label. Here, the data related to the use of the focusing lens 21 includes, for example, data indicating the characteristics of a laser incident on the focusing lens 21 during laser processing performed by the laser machine 20, data indicating the characteristics of a target work radiated with a laser during laser processing, and data indicating the characteristics required for the laser processing.” Also see FIG. 4 (shown below) – The usage history of focusing lenses is collected to train a machine learning model. [0044]-[0045] “Therefore, it is necessary to perform cleaning of the focusing lens 21 periodically as part of maintenance. However, there are cases in which the spatters adhering to the focusing lens 21 can be removed completely by cleaning and cannot be removed completely. In a case in which spatters cannot be removed completely, it is necessary to judge the quality of the focusing lens 21 in order to determine whether the focusing lens 21 after the cleaning is to be used again. However, as described in Related Art, the quality judgment was conventionally performed on the basis of user's rule of thumb and it was difficult to set a quantitative threshold. Moreover, conventionally, the use of the focusing lens 21 after cleaning has not been sufficiently taken into consideration.” See FIGs. 3A and 3B – The training data includes data from lenses that can be cleaned and reused and lenses that are permanently damaged and need to be replaced (e.g., includes a first lens and a target lens).)
acquire second information on an expected lens performance of the first interchangeable lens or a reference interchangeable lens different from the first interchangeable lens and the target interchangeable lens, prior to the usage of the first interchangeable lens or the target interchangeable lens by the user; and (Kubo [0063] “The learning unit 13 receives a pair of the input data and the label as training data and performs supervised learning using the training data to construct a learning model.” [0021] “FIG. 2 is a vertical cross-sectional view schematically illustrating a configuration of a laser machine according to an embodiment of the present invention. FIG. 3A is a schematic plan view when a focusing lens (with no spatter adhering thereto) according to an embodiment of the present invention is seen in the same axial direction as a laser beam.” [0042] “FIG. 3A illustrates a state in which the focusing lens 21 is not used and no spatter adheres to the focusing lens 21. In this state, focusing of the focusing lens 21 can be performed appropriately and laser processing can be executed appropriately.” – The label data includes labels of lenses before use. [0044]-[0045] “Therefore, it is necessary to perform cleaning of the focusing lens 21 periodically as part of maintenance. However, there are cases in which the spatters adhering to the focusing lens 21 can be removed completely by cleaning and cannot be removed completely. In a case in which spatters cannot be removed completely, it is necessary to judge the quality of the focusing lens 21 in order to determine whether the focusing lens 21 after the cleaning is to be used again. However, as described in Related Art, the quality judgment was conventionally performed on the basis of user's rule of thumb and it was difficult to set a quantitative threshold. Moreover, conventionally, the use of the focusing lens 21 after cleaning has not been sufficiently taken into consideration.” See FIGs. 3A and 3B – The training data includes data from lenses that can be cleaned and reused and lenses that are permanently damaged and need to be replaced (e.g., includes a first lens and a target lens).)
[Kubo FIG. 4 reproduced here (media_image1.png, greyscale)]
acquire a first machine learning model; and provide the first information on the operating environment of the first interchangeable lens and the second information on the expected lens performance of the first interchangeable lens or the reference interchangeable lens as input data to the first machine learning model and to generate a second machine learning model for estimating a deterioration in the expected lens performance of the target interchangeable lens after the usage of the target interchangeable lens by the user, in which parameters of the first machine learning model are adjusted to permit the estimating of the deterioration in the expected lens performance of the target interchangeable lens based on the operating environment of the target interchangeable lens; (Kubo [0063] “The learning unit 13 receives a pair of the input data and the label as training data and performs supervised learning using the training data to construct a learning model. For example, the learning unit 13 performs supervised learning using a neural network. In this case, the learning unit 13 performs forward propagation in which the pair of the input data and the label included in the training data is input to a neural network formed by combining perceptrons and the weighting factors for the respective perceptrons included in the neural network are changed so that the output of the neural network is the same as the label.” – The first and second data are used with the labels of the quality of the lenses to perform supervised learning, which includes modifying the weights of a model to yield another model. [0027] “Due to this, the constructed learning model is a learning model capable of judging the quality of the optical component by taking the use of the optical component into consideration.” – The trained ML model determines whether there is deterioration of the lens.
[0044]-[0045] “Therefore, it is necessary to perform cleaning of the focusing lens 21 periodically as part of maintenance. However, there are cases in which the spatters adhering to the focusing lens 21 can be removed completely by cleaning and cannot be removed completely. In a case in which spatters cannot be removed completely, it is necessary to judge the quality of the focusing lens 21 in order to determine whether the focusing lens 21 after the cleaning is to be used again. However, as described in Related Art, the quality judgment was conventionally performed on the basis of user's rule of thumb and it was difficult to set a quantitative threshold. Moreover, conventionally, the use of the focusing lens 21 after cleaning has not been sufficiently taken into consideration.” See FIGs. 3A and 3B – The training data includes data from lenses that can be cleaned and reused and lenses that are permanently damaged and need to be replaced (e.g., includes a first lens and a target lens).)
generate third information on an estimated lens performance of the target interchangeable lens after usage of the target interchangeable lens by the user, based on the first information on the operating environment of the target interchangeable lens, the second information on the expected lens performance of the target interchangeable lens, and the second machine learning model; and (Kubo [0027] “In this manner, the machine learning device 10 performs supervised learning which uses the use data related to the use of the focusing lens 21 as well as the image data obtained by imaging the focusing lens 21 as part of the input data to construct a learning model. Due to this, the constructed learning model is a learning model capable of judging the quality of the optical component by taking the use of the optical component into consideration.” See FIG. 6 (right) – The second learning model, which is trained based on the first and second information, outputs an estimate of the lens performance. This is just the use of the machine learning model to make an inference. [0044]-[0045] “Therefore, it is necessary to perform cleaning of the focusing lens 21 periodically as part of maintenance. However, there are cases in which the spatters adhering to the focusing lens 21 can be removed completely by cleaning and cannot be removed completely. In a case in which spatters cannot be removed completely, it is necessary to judge the quality of the focusing lens 21 in order to determine whether the focusing lens 21 after the cleaning is to be used again. However, as described in Related Art, the quality judgment was conventionally performed on the basis of user's rule of thumb and it was difficult to set a quantitative threshold. Moreover, conventionally, the use of the focusing lens 21 after cleaning has not been sufficiently taken into consideration.” See FIGs. 3A and 3B – The training data includes data from lenses that can be cleaned and reused and lenses that are permanently damaged and need to be replaced (e.g., includes a first lens and a target lens).)
perform a predetermined determination on the target interchangeable lens based on the third information. (Kubo [0027] “In this manner, the machine learning device 10 performs supervised learning which uses the use data related to the use of the focusing lens 21 as well as the image data obtained by imaging the focusing lens 21 as part of the input data to construct a learning model. Due to this, the constructed learning model is a learning model capable of judging the quality of the optical component by taking the use of the optical component into consideration.” [0064] “In the present embodiment, the output of the neural network is classified into two classes of "good" and "defective", and a probability that the output is classified to a certain class is output. Forward propagation is performed such that the value of a probability of the quality of the focusing lens 21 output by the neural network (for example, a value of the probability of 90% that the quality is "good") is the same as the evaluation value of the label (for example, when the label indicates "good" in the quality, the value of the probability of "good" output by the neural network is 100%).” – The second learning model, which is trained based on the first and second information, outputs an estimate of the lens performance as a probability (third information). Then, a determination of ‘good’ or ‘defective’ (lens quality determination) is made based on this probability. This is just the use of the machine learning model to make an inference regarding performance (e.g., ‘good’ or ‘defective’). [0044]-[0045] “Therefore, it is necessary to perform cleaning of the focusing lens 21 periodically as part of maintenance. However, there are cases in which the spatters adhering to the focusing lens 21 can be removed completely by cleaning and cannot be removed completely. In a case in which spatters cannot be removed completely, it is necessary to judge the quality of the focusing lens 21 in order to determine whether the focusing lens 21 after the cleaning is to be used again. However, as described in Related Art, the quality judgment was conventionally performed on the basis of user's rule of thumb and it was difficult to set a quantitative threshold. Moreover, conventionally, the use of the focusing lens 21 after cleaning has not been sufficiently taken into consideration.” See FIGs. 3A and 3B – The training data includes data from lenses that can be cleaned and reused and lenses that are permanently damaged and need to be replaced (e.g., includes a first lens and a target lens).)
Kubo suggests including use data related to the use of a focusing lens as input data for determining whether the focusing lens is defective (Kubo [0026]-[0027] “In the following description, the data related to the use of the focusing lens 21 will be referred to as “use data”. In this manner, the machine learning device 10 performs supervised learning which uses the use data related to the use of the focusing lens 21 as well as the image data obtained by imaging the focusing lens 21 as part of the input data to construct a learning model.”), but Kubo does not appear to explicitly teach the following, which Mullis teaches:
acquire first information on a usage history of a first interchangeable lens attachable to a camera body to capture an image, the first interchangeable lens having at least one optical element, after usage of the first interchangeable lens by a user, the usage history including information on an operating environment of the first interchangeable lens, the information on the operating environment including at least one of a drive or control information, temperature information, humidity information, geographic position information indicating a geographic location of the first interchangeable lens, and information about an external force applied to the first interchangeable lens; (Mullis [0023] “The array camera module 102 includes an array of imaging components 104 formed by a sensor and a lens stack array and the array camera module 102 is configured to communicate with a processor 108.” [0030] “A Calibration process (200) can involve performing a defect detection process (205), a photometric calibration process (210), a geometric calibration process (215), and/or a MTF calibration process (220). One skilled in the art will recognize that a calibration process may include other processes for detecting other types of defects and/or to collect information about other aspects of the array camera and the underlying imaging components if other types of information are needed to perform the image processing algorithms without departing from this invention.” [0030] “Furthermore, calibration process 200 and/or each of the individual calibration/error detection processes included in process 200 may be performed one or more times at each of several predetermined temperatures and the results may be aggregated into calibration information that is indexed by temperature in accordance with a number of embodiments of this invention. 
In accordance with these embodiments, the temperature of the environment and/or the array camera being tested at the time of an iteration of the process 200 is optionally determined (202). The temperature may be measured by a connected device such as, but not limited to, a dark sensor or may be input by an operator depending on the embodiment. The determined temperature is associated with the information generated during the iteration. Furthermore, process 200 may determine whether additional iterations need to be performed at other temperatures (225). If so, process 200 is repeated after the temperature of a test environment is adjusted. Otherwise process 200 ends.” – Mullis teaches detecting defects in a camera array that includes a lens, and determines those defects based on measured temperature data.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the environmental use data used as input to the lens defect detection model of Kubo with the temperature-based defect detection of the camera and lens in Mullis. A person of ordinary skill in the art, seeking automated detection of defects in an optical lens based on data representing the context in which the focusing lens is used, would be motivated to look to Mullis, which detects image defects due to the temperature of the environment, in order to improve the optical performance of the camera/lens. (Kubo [0008] “Therefore, an object of the present invention is to provide a machine learning device, a machine learning system, and a machine learning method for judging the quality of optical components by taking the use of optical components into consideration.” [0053] “The use data includes, for example, any one or all of the data indicating the characteristics of a laser incident on the focusing lens 21 during laser processing, the data indicating the characteristics of a target work radiated with a laser during laser processing, and the data indicating the characteristics required for laser processing.”; Mullis [0028] “During a manufacture process and/or periodically during the life of an array camera, a calibration process may be performed to generate or update information that can be utilized to perform the super-resolution processing algorithms. A calibration process to collect the calibration information in accordance with embodiments of this invention is illustrated in FIG. 2.” Abstract “Systems and methods for calibrating an array camera are disclosed. 
Systems and methods for calibrating an array camera in accordance with embodiments of this invention include the detecting of defects in the imaging components of the array camera and determining whether the detected defects may be tolerated by image processing algorithms. The calibration process also determines translation information between imaging components in the array camera for use in merging the image data from the various imaging components during image processing. Furthermore, the calibration process may include a process to improve photometric uniformity in the imaging components.”)
Claims 17 and 18
Regarding claim 17, Kubo teaches:
An inspection apparatus comprising: a memory configured to store a program; and a processor communicatively connected to the memory and configured to execute the program to: (Kubo [0092] “In the respective embodiments described above, although the functions included in each of the machine learning device 10, the laser machine 20, and the imaging device 30 are realized by separate devices, some or all of these functions may be realized by an integrated device.” [0088] “Each of the devices included in the machine learning system can be realized by hardware, software, or a combination thereof. Moreover, the machine learning method performed by the cooperation of the respective devices included in the machine learning system can be realized by hardware, software, or a combination thereof. Here, being realized by software means being realized when a computer reads and executes a program.” [0085]-[0086] “In step S23, the learning unit 13 inputs the respective pieces of data input in steps S21 and S22 to the learned learning model stored in the learning model storage unit 14 as input data. The learning unit 13 outputs the output of the learning model corresponding to this input to the output presenting unit 15. The output presenting unit 15 presents the output of the learning model input from the learning unit 13 to the user as the result of the quality judgment. By the operations described above, the machine learning device 10 can judge the quality of optical components by taking the use of the optical components into consideration.” [0070] “Hereinabove, the functional blocks included in the machine learning device 10 have been described. In order to realize these functional blocks, the machine learning device 10 includes an arithmetic processing device such as a central processing unit (CPU). 
Moreover, the machine learning device 10 includes an auxiliary storage device such as a hard disk drive (HDD) storing various control programs such as application software and an operating system (OS) and a main storage device such as a random access memory (RAM) for storing data which is temporarily necessary for an arithmetic processing device to execute programs.” [0072] “However, since the machine learning device 10 involves a large amount of arithmetic operations associated with supervised learning, the supervised learning may be processed at a high speed, for example, when a graphics processing unit (GPU) is mounted on a personal computer and the GPU is used for arithmetic processing associated with the supervised learning according to a technique called general-purpose computing on graphics processing units (GPGPU).” – A processing system with multiple cores (CPU and GPU) that can each include multiple units (e.g., processing units, acquisition units, determination units, and any other computing units).)
acquire the second machine learning model generated by the information processing apparatus according to claim 1, first information on a usage history of the target interchangeable lens, and the second information on the expected lens performance of the target interchangeable lens; and (Kubo See FIG. 4 (shown again below) – The trained (second) machine learning model is transmitted between inspection devices, so the processor/GPU and/or one of their cores receives/acquires this information. [0093] “Moreover, one machine learning device 10 may be connected to a plurality of laser machines 20 and a plurality of imaging devices 30. Moreover, one machine learning device 10 may perform learning on the basis of the training data acquired from a plurality of laser machines 20 and a plurality of imaging devices 30. Furthermore, in the above described embodiments, although one machine learning device 10 is illustrated, a plurality of machine learning devices 10 may be present. That is, the relation between the machine learning device 10 and the laser machine 20 and the imaging device 30 may be one-to-one relation and may be one-to-multiple relation or multiple-to-multiple relation.” [0095] “As described in Modification 1, when a plurality of machine learning devices 10 is present, a learning model stored in the learning model storage unit 14 of any one of the machine learning devices 10 may be shared between other machine learning devices 10. When the learning model is shared between a plurality of machine learning devices 10, since supervised learning can be performed by the respective machine learning devices 10 in a distributed manner, the efficiency of supervised learning can be improved.” – Any training data and/or labels (e.g., the first, second, and third information) and any machine learning model trained by any of the machine learning devices can be shared to allow for distributed training.)
[Kubo FIG. 4 reproduced again here (media_image3.png, greyscale)]
generate third information on an estimated lens performance of the target interchangeable lens after usage of the target interchangeable lens by the user, based on the first information, the second information, and the second machine learning model. (Kubo [0027] “In this manner, the machine learning device 10 performs supervised learning which uses the use data related to the use of the focusing lens 21 as well as the image data obtained by imaging the focusing lens 21 as part of the input data to construct a learning model. Due to this, the constructed learning model is a learning model capable of judging the quality of the optical component by taking the use of the optical component into consideration.” See FIG. 6 (right) – The second learning model, which is trained based on the first and second information, outputs an estimate of the lens performance. This is just the use of the machine learning model to make an inference. The third information could be the probability output from the machine learning model or the determination of ‘good’ or ‘defective’ based thereon.)
Regarding claim 18, claim 18 recites the method executed by the apparatus of claim 17 and is rejected for at least the same reasons as claim 17.
Claim 9: Kubo, Mullis, and Montminy
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over US 2019/0061049 A1 to Kubo (Kubo) in view of US 2018/0040135 A1 to Mullis (Mullis) and US 2007/0126869 A1 to Montminy et al. (Montminy).
Claim 9
Kubo and Mullis teach the features of claim 4. Kubo teaches that the performance of optical components degrades with age and contamination, but does not appear to explicitly teach the following, which Montminy teaches:
The information processing apparatus according to Claim 4, wherein the second information further includes fifth information including at least any one of design-time information, manufacturing-time information, and catalog data of the first interchangeable lens or the reference interchangeable lens. (Note: “Catalog data” is not a term of art, and the specification provides little guidance as to the metes and bounds of the term. Accordingly, the term “catalog data” is being interpreted to include any information about the camera that might be found in a catalog, including camera health parameters used in a self-diagnostic system. Montminy [0032] “The term “camera health record” is used herein to represent data that characterizes the health and operation of a camera, such as a digital video camera, in a video surveillance system. Each record can be based on or include a set of reference image data acquired from the camera during operation. Stored camera health records characterize known states of normal camera operation. A camera health record preferably includes a plurality of “camera health parameters”, each of which can be compared to stored camera health parameters to detect a camera malfunction.” – Camera health parameters are interpreted to be catalog data of the lens unit (or camera attached thereto).)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the lens use data used as input to the machine learning model of Kubo with the computed camera health record of Montminy. A person of ordinary skill in the art, motivated by the desire in Kubo to automate the inspection process based on data relevant to the health of the lens as it ages or is contaminated, would look to the camera health measurement records of Montminy, which provide an automated system that can continuously, or substantially continuously, monitor the health of an optical device after use, in a manner that saves time. (Kubo [0003] “Optical components used in industrial laser machines are contaminated with dirt or are deteriorated with aging. The absorptivity of a laser beam changes due to the dirt or deterioration and a desired performance is not obtained.” [0080] “By the operations described above, the learning unit 13 performs supervised learning using the use data related to the use of the focusing lens 21 and the image data as input data to construct a learning model. In this way, it is possible to construct a learning model for performing quality judgment of the focusing lens 21 by taking the use of the focusing lens 21 into consideration.” [0086] “By the operations described above, the machine learning device 10 can judge the quality of optical components by taking the use of the optical components into consideration. Moreover, the user can determine whether it is necessary to replace the focusing lens 21 or the like by referring to the presented result of quality judgment. In this way, it is possible to automate quality judgment without requiring the user's judgment based on visual observation which was conventionally performed whenever judgment is performed. 
Moreover, it is possible to model the conventional obscure judgment criteria and to indicate the judgment results as numerical values.”; Montminy [0009] “The present invention addresses problems of conventional approaches by having an automated system that can continuously, or substantially continuously, monitor the health of each camera to a detect a camera malfunction which is due to either external or internal conditions, as described earlier.” [0030] “An advantage of the invention over known manual approaches is a significant time saving, more considerable as the number of cameras in the video surveillance system increases.”)
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
(From A Prior Office Action)
NPL: “11 Ways to Improve the Sharpness of Your Images, Part 1” by Daniel (Daniel teaches that optical design of a lens and missed focus are parameters that affect image sharpness. It would be obvious to use these as input or output parameters in a machine learning model.)
NPL: “11 Ways to Improve the Sharpness of Your Images, Part 2” by Daniel (Daniel teaches that depth of field and camera shake are parameters that affect image sharpness. It would be obvious to use these as input or output parameters in a machine learning model.)
NPL: “11 Ways to Improve the Sharpness of Your Images, Part 3” by Daniel (Daniel teaches that noise and atmospheric distortion are parameters that affect image sharpness. It would be obvious to use these as input or output parameters in a machine learning model.)
NPL: “11 Ways to Improve the Sharpness of Your Images, Part 4” by Daniel (Daniel teaches that mirror slap, shutter vibration, and diffraction are parameters that affect image sharpness. It would be obvious to use these as input or output parameters in a machine learning model.)
NPL: “11 Ways to Improve the Sharpness of Your Images, Part 5” by Daniel (Daniel teaches that image stabilization is a parameter that affects image sharpness. It would be obvious to use this as an input or output parameter in a machine learning model.)
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAY MICHAEL WHITE whose telephone number is (571)272-7073. The examiner can normally be reached Mon-Fri 11:00-7:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ryan Pitaro, can be reached at 571-272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.M.W./Examiner, Art Unit 2188
/RYAN F PITARO/Supervisory Patent Examiner, Art Unit 2188