Detailed Action
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
2. Applicant’s election of Group I in the reply filed on 1-10-26 is acknowledged. Because applicant did not distinctly and specifically point out the supposed errors in the restriction requirement, the election has been treated as an election without traverse (MPEP § 818.01(a)).
Claim 20 is withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected invention, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 1-10-26.
Claim Rejections - 35 USC § 101
3. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 and 3 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a computer-implemented method of gathering information about an object and then providing instructions via the computer to process the object. This judicial exception is not integrated into a practical application because the claims are directed to nothing significantly more than applying instructions related to an abstract idea, concerning a generic object, via a regular computer to provide generic processing of the object. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims recite a computing device, which is well known in the art, and the computing device is not critical to the patentability of the claims, since it is well known to use computer controls in processing workpieces/objects. Further, the claims do not positively recite the machine for processing the workpiece/object, and the claim language related to the machine for processing does not provide specific structure and function for this machine.
Claim Interpretation
4. The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Regarding claim 1, applicant has invoked 35 U.S.C. 112(f) means-plus-function analysis with respect to the claimed computing device. As seen in applicant’s originally filed disclosure in paragraphs [0037], [0042]-[0043], [0067], [0081]-[0097] and [0099]-[0108], the computing device is detailed as an edge computing device, a high power computing device, a single machine computing device, multiple computing devices, circuitry, a sensor data pre-processing engine, a local computing device, a machine computing device having a processor, communication interface, computer readable medium and data store, a desktop computing device, a laptop computing device, a mobile computing device, a server computing device, a cloud computing device, a data processing computing device, and an NVIDIA Jetson Orin package, such as an Advantech MIC-711-OX.
Claim Rejections - 35 USC § 112
5. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Applicant has invoked 35 U.S.C. 112(f) means-plus-function analysis with respect to the claimed computing device, as detailed earlier in paragraph 4 of this Office action, and the terms/phrases of applicant’s originally filed specification, being “for instance” in paragraph [0067], “in some examples”, “any suitable” and “including but not limited to” in paragraph [0086], “in some examples” and “including but not limited to” in paragraph [0087], “including but not limited to” in paragraph [0088], “any suitable” and “any other suitable” in paragraph [0090], “e.g.” in paragraph [0099], “such as” in paragraph [0101], “such as”, “e.g.” and “etc.” in paragraph [0102], “such as” in paragraph [0103], “in some examples” and “in other examples” in paragraph [0104], “such as”, “e.g.” and “etc.” in paragraph [0106], “and specifically” and “such as” in paragraph [0107], and “for instance” in paragraph [0108], render the claim indefinite in that it is unclear whether other types of computing devices than those disclosed are being contemplated by the claim. Further, the list of indefinite language in applicant’s specification detailed earlier provides examples of indefinite language and may not encompass all indefinite language related to the claimed computing device. Further, it is unclear whether “a computing device” in line 4 of claim 1 is the same or different than “a computing device” in line 3 of claim 1. Further, it is unclear whether “a computing device” in line 7 is the same or different than “a computing device” in line 3 and/or “another computing device” in line 5. Further, it is unclear whether “a computing device” in line 10 is the same or different than “a computing device” in line 3 and/or “another computing device” in line 5. Further, it is unclear whether “a computing device” in lines 11-12 is the same or different than “a computing device” in line 3 and/or “another computing device” in line 5. Further, it is unclear whether “the processed output” in line 12 is the same or different than “the output” in line 10. Further, claim 1 lacks antecedent basis for “the output” in line 10 and “the processed output” in line 12.
Claim 2 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Given the 112(b) rejections of claim 1 detailed earlier, it is unclear whether the edge computing device of claim 2 is the same or different than the computing device detailed in lines 3, 4 and/or 10 of claim 1, and the same or different than the another computing device in line 5 of claim 1. Further, it is unclear whether the machine computer of claim 2 is the same or different than the computing device in lines 3, 4 and/or 10 of claim 1, and the same or different than the another computing device in line 5 of claim 1.
Claim 3 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. It is unclear whether each of the computing devices detailed in lines 2 and 4 of claim 3 is the same or different than the computing devices in lines 3, 4, 7, 10 and/or 11-12 of claim 1 and is the same or different than the another computing device in line 5 of claim 1.
Claim 4 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. It is unclear whether the image detailed in line 6 of claim 4 is the same or different than the image detailed in line 2 of claim 4.
Claim 7 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. It is unclear whether the computing devices detailed in lines 2, 4 and 6 of claim 7 are the same or different than each other, the same or different than the computing device detailed in lines 3, 4, 7, 10 and/or 11-12 of claim 1, and the same or different than the another computing device in line 5 of claim 1.
Claim 8 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. It is unclear whether the computing devices detailed in lines 2, 4, 7, 10, 13 and 16-17 of claim 8 are the same or different than each other, the same or different than the computing device detailed in lines 3, 4, 7, 10 and/or 11-12 of claim 1, and the same or different than the another computing device in line 5 of claim 1. Further, it is unclear whether the first output and the second output in claim 8 are the same or different than the output and the processed output in claim 1.
Claim 9 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. It is unclear whether the computing device in line 1 of claim 9 is the same or different than the computing device detailed in lines 3, 4, 7, 10 and/or 11-12 of claim 1 and is the same or different than the another computing device in line 5 of claim 1. Further, it is unclear whether the 3D model detailed in lines 1-2 of claim 9 is the same or different than the 3D model detailed in line 13 of claim 8.
Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. It is unclear how an image of a prior workpiece equates to the second surface of the workpiece, since the image of the prior workpiece does not include a surface of the current workpiece being imaged. Further, the prior cut workpiece renders the claim indefinite in that it is unclear how the prior workpiece is cut, since no cutting operation is detailed in claim 10 or the claims from which claim 10 depends.
Claim 11 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. It is unclear whether the computing device in line 2 of claim 11 is the same or different than the computing device detailed in lines 3, 4, 7, 10 and/or 11-12 of claim 1 and is the same or different than the another computing device in line 5 of claim 1.
Claim 12 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. It is unclear whether the computing device in lines 1-2 of claim 12 is the same or different than the computing device detailed in lines 3, 4, 7, 10 and/or 11-12 of claim 1 and is the same or different than the another computing device in line 5 of claim 1.
Claim 13 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. It is unclear whether the computing device in lines 1-2 of claim 13 is the same or different than the computing device detailed in lines 3, 4, 7, 10 and/or 11-12 of claim 1 and is the same or different than the another computing device in line 5 of claim 1.
Claim 14 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. It is unclear whether the computing device in line 1 of claim 14 is the same or different than the computing device detailed in lines 3, 4, 7, 10 and/or 11-12 of claim 1 and is the same or different than the another computing device in line 5 of claim 1.
Claim 15 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. It is unclear whether the computing device in line 2 of claim 15 is the same or different than the computing device detailed in lines 3, 4, 7, 10 and/or 11-12 of claim 1 and is the same or different than the another computing device in line 5 of claim 1.
Claim 16 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. It is unclear whether the image detailed in line 3 of claim 16 is the same or different than the image detailed in line 1 of claim 16.
Claim 17 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. It is unclear whether the substantially largest inscribing circle in claim 17 is the same or different than the substantially largest inscribing circle detailed in parent claim 16. Further, it is unclear whether the image detailed in line 2 is the same or different than the image detailed in parent claim 16. Further, it is unclear whether the image detailed in line 3 of claim 17 is the same or different than the image detailed in line 2 of claim 17. Further, it is unclear whether the fatty region detailed in claim 17 is the same or different than the fatty region detailed in parent claim 16. Further, it is unclear whether the steak detailed in claim 17 is the same or different than the steak detailed in parent claim 16. Further, it is unclear whether the sciatic nerve detailed in claim 17 is the same or different than the sciatic nerve detailed in parent claim 16.
Claim Rejections - 35 USC § 102
6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3-4 and 19 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by U.S. Patent Application Publication No. 2022/0026269 to Villerup et al.
Referring to claim 1, Villerup et al. discloses a computer-implemented method of optimizing machine processing of a workpiece, the method comprising: receiving, by a computing device – at 1108, at least one sensor input regarding a workpiece – see input from imaging sensors – at 1104,1105 in figures 11-12 and see paragraphs [0043]-[0044] and [0072]-[0073]; performing, by a computing device – at 1108, pre-processing of the at least one sensor input – via 1106,1107, for at least one of efficient transfer to another computing device – not required by the claim given the “at least one of” phrase, and optimal use in one or more machine learning models – at 1107; executing, by a computing device – at 1108, one or more machine learning models – at 1107, to output requested information – at 1109,1209, regarding the workpiece based on data in the at least one sensor input – see figures 11-12 and paragraphs [0042]-[0044] and [0072]-[0073]; receiving and processing, by a computing device – at 1108, the output – at 1109,1209; and controlling at least one aspect of the machine processing of the workpiece – see via 1211, by a computing device – at 1108, in response to the processed output – at 1109,1209 – see figures 11-12 and paragraphs [0072]-[0073].
Referring to claim 3, Villerup et al. further discloses verifying, by a computing device – at 1108, a machine learning model output – at 1107,1109, corresponds to the at least one sensor input – at 1104,1105 – see figures 11-12 and paragraphs [0043]-[0044] and [0072]-[0073]; and identifying, by a computing device – at 1108, the machine learning model – at 1107, with a unique identifier in a communication including the at least one sensor input – at 1104,1105 – see weight and density identifiers detailed in paragraphs [0043]-[0044] and [0072]-[0073].
Referring to claim 4, Villerup et al. further discloses the one or more machine learning models – at 1107, after receiving at least one image – via 1104, of the workpiece as input, are configured to perform at least one of: generating at least one of workpiece classification and a classification probability score of at least one possible type of workpiece for the workpiece; generating a region of interest in an image of the workpiece – region of interest being where to cut the workpiece as seen at 1211,1212 in figure 12 and paragraph [0073]; and generating an outline in an image of the workpiece of at least one object or feature of the workpiece.
Referring to claim 19, Villerup et al. further discloses generating an outline in an image of the workpiece of at least one object or feature of the workpiece includes at least one of:
outlining at least one of a bone(s), a fat/lean boundary, an edge of the workpiece, a perimeter of the workpiece, a bottom surface of the workpiece, and cut lines of the workpiece – see edge, perimeter and cut lines as seen in figures 11-12 and paragraphs [0043]-[0044] and [0072]-[0073]; and outputting a multi-class output image including outlines of at least two types of features – see weight and density features in figures 11-13 and paragraphs [0043]-[0044] and [0072]-[0075].
Claim Rejections - 35 USC § 103
7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Villerup et al. as applied to claims 1 or 4 above.
Referring to claim 2, Villerup et al. further discloses execution of the one or more machine learning models – at 1107, is carried out by a computing device – at 1108, and wherein controlling at least one aspect of the machine processing of the workpiece – at 1101,1211,1212, in response to the processed output is carried out by a machine computer – at 1108, of a workpiece processing system – at 1211, configured to carry out at least one aspect of processing the workpiece – at 1101,1212 – see figures 11-12 and paragraphs [0073]-[0074]. Villerup et al. does not disclose the learning models are carried out by an edge computing device. However, it would have been obvious to one of ordinary skill in the art to take the method of Villerup et al. and use any suitable computing device for carrying out the learning models including the edge computing device, so as to yield the predictable result of more quickly and accurately automatically processing the workpiece as desired.
Referring to claim 18, Villerup et al. does not disclose the one or more machine learning models configured to generate a region of interest in an image of the workpiece include an EfficientNet (ENet) semantic binary segmentation model. However, it would have been obvious to one of ordinary skill in the art to take the method of Villerup et al. and use any suitable learning model including the EfficientNet model claimed, so as to yield the predictable result of more quickly and accurately automatically processing the workpiece as desired.
Claims 5-17 are rejected under 35 U.S.C. 103 as being unpatentable over Villerup et al. as applied to claim 4 above, and further in view of U.S. Patent Application Publication No. 2021/0204553 to Mehta et al.
Referring to claim 5, Villerup et al. does not disclose the one or more machine learning models include a workpiece classification machine learning model, and wherein the workpiece classification machine learning model includes a convolutional neural network. Mehta et al. does disclose the one or more machine learning models include a workpiece classification machine learning model – see for example paragraphs [0006], [0026], [0045]-[0047] and [0062], and wherein the workpiece classification machine learning model includes a convolutional neural network – see figures 4a-4b and paragraphs [0006], [0026], [0045]-[0047] and [0062]. Therefore it would have been obvious to one of ordinary skill in the art to take the method of Villerup et al. and add the classification learning model and neural network of Mehta et al., so as to yield the predictable result of more quickly and accurately automatically processing the workpiece as desired.
Referring to claim 6, Villerup et al. does not disclose the one or more machine learning models include an image segmentation machine learning model configured to identify features of a workpiece, and wherein the image segmentation machine learning model includes a fully convolutional network. Mehta et al. does disclose the one or more machine learning models include an image segmentation machine learning model configured to identify features of a workpiece – see for example paragraphs [0045]-[0047], and wherein the image segmentation machine learning model includes a fully convolutional network – see for example paragraphs [0045]-[0047]. Therefore it would have been obvious to one of ordinary skill in the art to take the method of Villerup et al. and add the image segmentation model and neural network of Mehta et al., so as to yield the predictable result of more quickly and accurately automatically processing the workpiece as desired.
Referring to claim 7, Villerup et al. as modified by Mehta et al. further discloses generating, with a computing device – at 920,930,932 in figure 9 of Mehta et al., at least first and second binary masks that correspond to at least first and second features of the workpiece within a single input image – see features such as category of meat, cut of meat, non-meat tissue detailed in paragraphs [0045]-[0047] of Mehta et al., executing, with a computing device – at 920,930,932, a mask combiner engine to combine the at least first and second binary masks into a single multi-class mask – see figures 4a-4b and paragraphs [0045]-[0047] of Mehta et al., and training the image segmentation machine learning model, with a computing device – at 920,930,932, using the single multi-class mask – see figures 4a-4b and paragraphs [0045]-[0050]. Therefore it would have been obvious to one of ordinary skill in the art to take the method of Villerup et al. and add the image segmentation model and neural network of Mehta et al., so as to yield the predictable result of more quickly and accurately automatically processing the workpiece as desired.
Referring to claim 8, Villerup et al. does not disclose receiving, with a computing device, images of first and second opposing surfaces of the workpiece, executing, with a computing device, an image segmentation machine learning model to generate a first output including an outline in an image of the first surface of the workpiece of at least one object or feature of the workpiece, executing, with a computing device, an image segmentation machine learning model to generate a second output including an outline in an image of the second surface of the workpiece of at least one object or feature of the workpiece, correlating, with a computing device, the at least one object or feature outlined in the image of the first surface of the workpiece with the at least one object or feature outlined in the image of the second surface of the workpiece, generating, with a computing device, a 3D model of the workpiece using the first and second outputs, the 3D model showing correlated at least one objects or features extending between the first and second opposing surfaces of the workpiece; and controlling at least one aspect of the processing of the workpiece, by a computing device, in response to the processed output. Mehta et al. does disclose receiving, with a computing device – at 134, 920,930,932, images of first and second opposing surfaces of the workpiece – at 104,204 – see via items 116-136 in figures 1a-2b, executing, with a computing device – at 920,930,932, an image segmentation machine learning model to generate a first output including an outline in an image of the first surface of the workpiece of at least one object or feature of the workpiece – see figures 4a-4b and paragraphs [0045]-[0047], executing, with a computing device – at 920,930,932, an image segmentation machine learning model to generate a second output including an outline in an image of the second surface of the workpiece of at least one object or feature of the workpiece – see figures 4a-4b and paragraphs [0045]-[0047], correlating, with a computing device – at 920,930,932, the at least one object or feature outlined in the image of the first surface of the workpiece with the at least one object or feature outlined in the image of the second surface of the workpiece – see figures 4a-4b and paragraphs [0045]-[0047], generating, with a computing device – at 920,930,932, a 3D model of the workpiece using the first and second outputs – see figures 4a-4b and paragraphs [0045]-[0047], the 3D model showing correlated at least one objects or features extending between the first and second opposing surfaces of the workpiece – see figures 4a-4b and paragraphs [0045]-[0047], and controlling at least one aspect of the processing of the workpiece – classification of the workpiece, by a computing device – at 920,930,932, in response to the processed output – see figures 4a-4b and paragraphs [0045]-[0047]. Therefore it would have been obvious to one of ordinary skill in the art to take the method of Villerup et al. and add the image segmentation model and neural network of Mehta et al., so as to yield the predictable result of more quickly and accurately automatically processing the workpiece as desired.
Referring to claim 9, Villerup et al. as modified by Mehta et al. further discloses generating, by a computing device – at 920,930,932, a 3D model of the workpiece using the first and second outputs includes, assigning X-Y coordinates to outlines of the at least one objects or features extending between the first and second opposing surfaces of the workpiece – see figures 1a-4b and paragraphs [0031]-[0047] of Mehta et al., aligning the outlines of the at least one objects or features extending between the first and second opposing surfaces of the workpiece – see figures 4a-4b and paragraphs [0031]-[0047] of Mehta et al., and at least one of, extrapolating the at least one objects or features extending between the first and second opposing surfaces of the workpiece through a thickness of the workpiece, and extrapolating density data from the first surface of the workpiece down to the second opposing surface of the workpiece to estimate a shape of the second surface including any voids – see figures 11-12 and paragraphs [0043]-[0044] and [0072]-[0073] of Villerup et al. Therefore it would have been obvious to one of ordinary skill in the art to take the method of Villerup et al. and add the image segmentation model and neural network of Mehta et al., so as to yield the predictable result of more quickly and accurately automatically processing the workpiece as desired.
Referring to claim 10, Villerup et al. as modified by Mehta et al. further discloses the image of the first surface of the workpiece is an image of the top surface of the workpiece – see figures 1a-4b and paragraphs [0045]-[0047] of Mehta et al., but does not disclose the image of the second surface of the workpiece is an image of a top surface of a prior cut workpiece. However, it would have been obvious to one of ordinary skill in the art to take the method of Villerup et al. as modified by Mehta et al. and have the image based on a top surface of a prior workpiece as claimed, so as to yield the predictable result of more quickly and accurately automatically processing the workpiece as desired.
Referring to claim 11, Villerup et al. as modified by Mehta et al. further discloses defining for a workpiece processing system – at 1211, with a computing device – at 1108, cut paths of the workpiece based on the at least one objects or features – see figures 11-12 and paragraphs [0043]-[0044] and [0072]-[0073] of Villerup et al., identified in the 3D model – see figures 1a-4b and paragraphs [0031]-[0047] of Mehta et al. Therefore it would have been obvious to one of ordinary skill in the art to take the method of Villerup et al. and add the image segmentation model and neural network of Mehta et al., so as to yield the predictable result of more quickly and accurately automatically processing the workpiece as desired.
Referring to claim 12, Villerup et al. as modified by Mehta et al. further discloses executing, with a computing device – at 920,930,932, a workpiece classification machine learning model to generate at least one of a workpiece classification – see classification detailed in paragraphs [0042]-[0050] of Mehta et al., and a classification probability score of at least one possible type of workpiece for the workpiece as output using the 3D model of the workpiece as input – see figures 1-4b and paragraphs [0042]-[0050] of Mehta et al. Therefore it would have been obvious to one of ordinary skill in the art to take the method of Villerup et al. and add the image segmentation model and neural network of Mehta et al., so as to yield the predictable result of more quickly and accurately automatically processing the workpiece as desired.
Referring to claim 13, Villerup et al. as modified by Mehta et al. further discloses receiving and processing, by a computing device – at 920,930,932, the output of a classification probability score includes categorizing the workpiece based on at least one of first and second classification probability scores for the workpiece – see figures 1-4b and paragraphs [0042]-[0050] of Mehta et al., using a demand for a first type of workpiece corresponding to the first classification probability score and a demand for a second type of workpiece corresponding to the second classification probability score – see paragraphs [0049]-[0050] of Mehta et al. Therefore it would have been obvious to one of ordinary skill in the art to take the method of Villerup et al. and add the image segmentation model and neural network of Mehta et al., so as to yield the predictable result of more quickly and accurately automatically processing the workpiece as desired.
Referring to claim 14, Villerup et al. does not disclose generating, with a computing device, a classification probability score of at least one possible type of workpiece for the workpiece includes at least one of: providing a label for the at least one possible type of workpiece if the classification probability score exceeds a minimum threshold; providing a list of first and second possible types of workpieces for the workpiece based on a first and second highest classification probability scores; and providing a list of possible types of workpieces for the workpiece and corresponding classification probability scores for each type. Mehta et al. does disclose generating, with a computing device – at 920,930,932, a classification probability score of at least one possible type of workpiece for the workpiece includes at least one of: providing a label for the at least one possible type of workpiece if the classification probability score exceeds a minimum threshold – see figures 1-5 and paragraphs [0042]-[0050], providing a list of first and second possible types of workpieces for the workpiece based on a first and second highest classification probability scores – see paragraphs [0042]-[0050], and providing a list of possible types of workpieces for the workpiece and corresponding classification probability scores for each type – see figures 1-5 and paragraphs [0042]-[0050]. Therefore it would have been obvious to one of ordinary skill in the art to take the method of Villerup et al. and add the image segmentation model and neural network of Mehta et al., so as to yield the predictable result of more quickly and accurately automatically processing the workpiece as desired.
Referring to claim 15, Villerup et al. as modified by Mehta et al. further discloses receiving and processing, by a computing device – at 920,930,932, the output of a classification probability score of at least one possible type of workpiece for the workpiece including categorizing the workpiece based on at least one of the classification probability score – see figures 1-5 and paragraphs [0042]-[0050] of Mehta et al., and a demand for the at least one possible type of workpiece, and performing, with a workpiece processing system, at least one of cutting, portioning, trimming, sorting, and packaging the workpiece based on its categorized type – see cutting in figure 12 of Villerup et al. and see sorting in paragraphs [0042]-[0050] of Mehta et al. Therefore it would have been obvious to one of ordinary skill in the art to take the method of Villerup et al. and add the image segmentation model and neural network of Mehta et al., so as to yield the predictable result of more quickly and accurately automatically processing the workpiece as desired.
Referring to claim 16, Villerup et al. does not disclose generating a region of interest in an image of the workpiece includes at least one of: superimposing a substantially largest inscribing circle on an image of the workpiece in a fatty region of a steak likely to include a sciatic nerve; and superimposing an outline on an image of the workpiece defining a likely peak height portion of the workpiece. Mehta et al. does disclose generating a region of interest in an image of the workpiece includes at least one of: superimposing a substantially largest inscribing circle on an image of the workpiece in a fatty region of a steak likely to include a sciatic nerve – not required by the claim given the “at least one of” phrase, and superimposing an outline on an image of the workpiece defining a likely peak height portion of the workpiece – see figures 1-4b and paragraphs [0031]-[0050]. Therefore it would have been obvious to one of ordinary skill in the art to take the method of Villerup et al. and add the image segmentation model and neural network of Mehta et al., so as to yield the predictable result of more quickly and accurately automatically processing the workpiece as desired.
Referring to claim 17, Villerup et al. as modified by Mehta et al. further discloses the one or more machine learning models configured to generate a region of interest in an image of the workpiece by superimposing a substantially largest inscribing circle on an image of the workpiece in a fatty region of a steak likely to include a sciatic nerve are trained to manage class imbalance by at least one of: weighting a positive class representing a fatty region of a steak likely to include a sciatic nerve, more than a negative class representing regions other than the fatty region of the steak likely to include the sciatic nerve, and penalizing the model when it misses the positive class; oversampling images with the positive class; and under sampling images that do not contain the positive class – not required by the claim given the “at least one of” phrase in parent claim 16.
Conclusion
8. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
The following patents are cited to further show the state of the art with respect to food/meat processing devices/methods in general:
U.S. Pat. No. 4,847,954 to Lapeyre et al. – shows meat processing device
U.S. Pat. No. 5,334,084 to O’Brien et al. – shows meat processing device
U.S. Pat. No. 5,937,080 to Vogeley et al. – shows meat processing device
U.S. Pat. No. 7,210,993 to Woods et al. – shows meat processing device
U.S. Pat. No. 7,621,806 to Bottemiller et al. – shows meat processing device
U.S. Pat. No. 10,310,493 to Hocker et al. – shows meat processing device
U.S. Pat. No. 11,992,931 to Foreman et al. – shows meat processing device
U.S. Pat. No. 12,016,344 to Blaine – shows meat processing device
9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID J PARSLEY whose telephone number is (571)272-6890. The examiner can normally be reached Monday-Friday, 8am-4pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter Poon, can be reached at (571) 272-6891. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID J PARSLEY/Primary Examiner, Art Unit 3643