DETAILED ACTION
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
Limitation | Claims | Interpretation/Support
“a trigger detection unit that . . .” | 1 | Trigger detection unit 211 as illustrated in figures 2 and 5 and as described in paragraphs 0041 and 0042
“a setting table that . . .” | 1, 4 | Setting table 212 as illustrated in figures 2 and 5 and as described in paragraphs 0040 and 0043
“a determination result acquisition unit that . . .” | 1, 2, 5 | Determination result acquisition unit 215 as illustrated in figures 2 and 5 and as described in paragraph 0061
“an imaging environment adjustment unit that . . .” | 1, 6 | Imaging environment adjustment unit 216 as illustrated in figures 2 and 5 and as described in paragraph 0062
“an accepting unit that . . .” | 3 | Accepting unit 218 as illustrated in figure 5 and as described in paragraph 0083
“an operation state acquisition unit that . . .” | 5 | Operation state acquisition unit 213 as illustrated in figures 2 and 5 and as described in paragraph 0046
“an image acquisition unit that . . .” | 7 | Image acquisition unit 217 as illustrated in figures 2 and 5 and as described in paragraph 0063
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 8 is rejected under 35 U.S.C. 101 because the applicant has provided evidence that the applicant intends the term “computer readable storage medium” to include non-statutory subject matter. The specification describes the computer readable storage medium using open-ended language, and it is therefore reasonable to interpret the term to encompass all possible media, including non-statutory media (see paragraph 0020). The words “storage” and/or “recording” are insufficient to convey only statutory embodiments to one of ordinary skill in the art absent an explicit and deliberate limiting definition, or a clear differentiation between storage media and transitory media, in the disclosure. As such, the claim is drawn to a form of energy. Energy is not one of the four statutory categories of invention, and therefore the claim is not statutory. Energy is not a series of steps or acts and thus is not a process; energy is not a physical article or object and as such is not a machine or a manufacture; and energy is not a combination of substances and therefore is not a composition of matter.
The Examiner suggests amending the claim to read as a “non-transitory computer readable storage medium”.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2007/0073439 (Habibi) (cited by Applicant).
Claim 1:
The cited prior art describes an imaging environment adjustment device comprising: (Habibi: “This disclosure generally relates to machine vision, and more particularly, to visual tracking systems using image capture devices.” Paragraph 0003; “In a further embodiment, the vision tracking system advantageously addresses the problems of occlusion and/or focus by controlling the position and/or orientation of one or more cameras independently of the robotic device.” Paragraph 0031; see the processor 302 and memory 304 as illustrated in figure 3)
a trigger detection unit that detects at least one type of trigger; (Habibi: “The process starts at block 902, which corresponds to either of the ending blocks of FIG. 7 (block 716) or FIG. 8 (block 816). Accordingly, the robot controller 116 has received the processor signal 118 from transducer 114 based upon the emulated output signal 110 communicated from the vision tracking system 100 (FIG. 1), or the robot controller 116 has received an emulated processor signal 202 directly communicated from the vision tracking system 100 (FIG. 2).” Paragraph 0152)
a setting table that stores at least one environment setting item defining an imaging environment in a machining machine; (Habibi: see the logic 314 stored in the memory 304 as illustrated in figure 3 and as described in paragraphs 0110 and 0111; “In some embodiments, known occlusions may be communicated to the vision tracking system 100. Such occlusions may be predicted based upon information available to or known by the robot controller 116, or the occlusions may be learned from prior robotic operations.” Paragraph 0104; “Logic 314 may include one or more algorithms to predict the occurrence of an occlusion. For example, if one or more portions of the manipulators 410, 412 are detected as they come into the field of view 124, the algorithm may determine that an occlusion event will occur in the future, based upon knowledge of where the workpiece 104 currently is, and will be in the future, in the workspace geometry. As another example, the relative positions of the workpiece 104 and robotic device 114 or portions thereof may be learned, known or predefined over the period of time that the workpiece 104 is in the workspace geometry.” Paragraph 0110; “Logic 314 resides in or is implemented in the memory 304.” Paragraph 0070)
a determination result acquisition unit that acquires a determination result as to whether or not to adjust the imaging environment into a control state indicated by the at least one environment setting item; and (Habibi: see the occlusion detection as described in paragraphs 0099, 0100, 0103, 0104; “Upon detection of the occlusion (determination of an occlusion in the occlusion region 502), the vision tracking system 100 adjusts movement of the image capture device 120 to eliminate or minimize the occlusion. For example, in response to the vision tracking system 100 detecting an occlusion event, the image capture device 120 may be moved backward, stopped or decelerated to avoid or mitigate the effect of the occlusion.” Paragraph 0099; “Detection of occlusion events are determined upon analysis of captured image data.” Paragraph 0103; “In some embodiments, known occlusions may be communicated to the vision tracking system 100. Such occlusions may be predicted based upon information available to or known by the robot controller 116, or the occlusions may be learned from prior robotic operations.” Paragraph 0104)
an imaging environment adjustment unit that, when the trigger detection unit detects the at least one type of trigger and the determination result acquired by the determination result acquisition unit indicates to adjust the imaging environment, adjusts the imaging environment into the control state indicated by the at least one environment setting item. (Habibi: see the vision tracking system 100 including the movement of the image capture device 120 based on the signal from the vision tracking system 100 (i.e., trigger) and the occlusion detection (i.e., determination result) as illustrated in figures 5A, 5B, 9 and as described in paragraphs 0152, 0153; “At block 908, in response to occlusion events, position of the image capture device 120 is further adjusted to avoid or mitigate the effect of occlusion events.” Paragraph 0153)
Claim 2:
The cited prior art describes the imaging environment adjustment device according to claim 1 further comprising
a determination unit that determines whether or not to adjust the imaging environment into the control state indicated by the at least one environment setting item, (Habibi: see the occlusion detection as described in paragraphs 0099, 0100, 0103, 0104; “Upon detection of the occlusion (determination of an occlusion in the occlusion region 502), the vision tracking system 100 adjusts movement of the image capture device 120 to eliminate or minimize the occlusion. For example, in response to the vision tracking system 100 detecting an occlusion event, the image capture device 120 may be moved backward, stopped or decelerated to avoid or mitigate the effect of the occlusion.” Paragraph 0099; “Detection of occlusion events are determined upon analysis of captured image data.” Paragraph 0103; “In some embodiments, known occlusions may be communicated to the vision tracking system 100. Such occlusions may be predicted based upon information available to or known by the robot controller 116, or the occlusions may be learned from prior robotic operations.” Paragraph 0104; “Logic 314 (FIG. 3) includes one or more algorithms that then identify the above-described occurrence of occlusion events.” Paragraph 0109)
wherein the determination result acquisition unit acquires the determination result from the determination unit. (Habibi: “FIG. 9 is a flowchart illustrating an embodiment of a process for moving position of the image capture device 120 (FIG. 1) so that the position is approximately maintained relative to the movement of workpiece 104.” Paragraph 0152; “At block 812, the emulated processor signal 202 is communicated to the robot controller 116.” Paragraph 0151)
Claim 3:
The cited prior art describes the imaging environment adjustment device according to claim 1 further comprising
an accepting unit that accepts input of the determination result indicating whether or not to adjust the imaging environment into the control state indicated by the at least one environment setting item, (Habibi: see the occlusion detection as described in paragraphs 0099, 0100, 0103, 0104; “Upon detection of the occlusion (determination of an occlusion in the occlusion region 502), the vision tracking system 100 adjusts movement of the image capture device 120 to eliminate or minimize the occlusion. For example, in response to the vision tracking system 100 detecting an occlusion event, the image capture device 120 may be moved backward, stopped or decelerated to avoid or mitigate the effect of the occlusion.” Paragraph 0099; “Detection of occlusion events are determined upon analysis of captured image data.” Paragraph 0103; “In some embodiments, known occlusions may be communicated to the vision tracking system 100. Such occlusions may be predicted based upon information available to or known by the robot controller 116, or the occlusions may be learned from prior robotic operations.” Paragraph 0104)
wherein the determination result acquisition unit acquires the determination result from the accepting unit. (Habibi: “FIG. 7 is a flowchart illustrating an embodiment of a process for emulating the output of an electromechanical movement detection system such as a shaft encoder. The process begins at block 702. At block 704, a plurality of images of a feature 108 (FIG. 1) corresponding to a workpiece 104 are captured by the vision tracking system 100.” Paragraph 0140; “FIG. 9 is a flowchart illustrating an embodiment of a process for moving position of the image capture device 120 (FIG. 1) so that the position is approximately maintained relative to the movement of workpiece 104.” Paragraph 0152)
Claim 4:
The cited prior art describes the imaging environment adjustment device according to claim 1,
wherein the at least one type of trigger includes multiple types of triggers, and (Habibi: see the emulated output signal 110 and the emulated processor signal 202 as described in paragraph 0152; “The process starts at block 902, which corresponds to either of the ending blocks of FIG. 7 (block 716) or FIG. 8 (block 816). Accordingly, the robot controller 116 has received the processor signal 118 from transducer 114 based upon the emulated output signal 110 communicated from the vision tracking system 100 (FIG. 1), or the robot controller 116 has received an emulated processor signal 202 directly communicated from the vision tracking system 100 (FIG. 2).” Paragraph 0152)
wherein the setting table stores the at least one environment setting item in association with each of the multiple types of triggers. (Habibi: see the logic 314 stored in the memory 304 for processing the emulated output signal 110 and the emulated processor signal 202 as illustrated in figure 3 and as described in paragraphs 0110 and 0111; “In some embodiments, known occlusions may be communicated to the vision tracking system 100. Such occlusions may be predicted based upon information available to or known by the robot controller 116, or the occlusions may be learned from prior robotic operations.” Paragraph 0104; “As noted above, some embodiments of logic 314 contain conversion information such that the determined position, velocity and/or acceleration information can be converted into information corresponding to the above described output signal of a shaft encoder or the signal of another electro-mechanical movement detection device.” Paragraph 0076; “Logic 314 may include one or more algorithms to predict the occurrence of an occlusion. For example, if one or more portions of the manipulators 410, 412 are detected as they come into the field of view 124, the algorithm may determine that an occlusion event will occur in the future, based upon knowledge of where the workpiece 104 currently is, and will be in the future, in the workspace geometry. As another example, the relative positions of the workpiece 104 and robotic device 114 or portions thereof may be learned, known or predefined over the period of time that the workpiece 104 is in the workspace geometry.” Paragraph 0110; “Logic 314 resides in or is implemented in the memory 304.” Paragraph 0070)
Claim 5:
The cited prior art describes the imaging environment adjustment device according to claim 1 further comprising
an operation state acquisition unit that acquires operation information indicating an operation state of a machining machine stored in association with the at least one environment setting item applied when the trigger detection unit detects the at least one type of trigger, (Habibi: see the belt 112 (i.e., table) as illustrated in figure 1; “Alternatively, the vision tracking system 100 may be configured to track movement of the belt 112 or another component whose movement is relatable to the speed of the belt 112 and/or workpiece 104 using machine-vision techniques, and to determine an emulated encoder output signal 110.” Paragraph 0054; “FIG. 7 is a flowchart illustrating an embodiment of a process for emulating the output of an electromechanical movement detection system such as a shaft encoder. The process begins at block 702. At block 704, a plurality of images of a feature 108 (FIG. 1) corresponding to a workpiece 104 are captured by the vision tracking system 100.” Paragraph 0140)
wherein the determination result acquisition unit acquires the determination result determined based on the operation information. (Habibi: “FIG. 7 is a flowchart illustrating an embodiment of a process for emulating the output of an electromechanical movement detection system such as a shaft encoder. The process begins at block 702. At block 704, a plurality of images of a feature 108 (FIG. 1) corresponding to a workpiece 104 are captured by the vision tracking system 100.” Paragraph 0140)
Claim 6:
The cited prior art describes the imaging environment adjustment device according to claim 5,
wherein the operation information includes position information indicating a position of at least any one of a tool spindle, a table, a robot, and a loader, and (Habibi: see the belt 112 (i.e., table) as illustrated in figure 1; “Alternatively, the vision tracking system 100 may be configured to track movement of the belt 112 or another component whose movement is relatable to the speed of the belt 112 and/or workpiece 104 using machine-vision techniques, and to determine an emulated encoder output signal 110.” Paragraph 0054)
wherein the imaging environment adjustment unit moves at least any one of the tool spindle, the table, the robot, and the loader to a retracted position based on the position information. (Habibi: see the movement of the image capture device 120 (i.e., robot) based on the signal from the vision tracking system 100 (i.e., trigger) and the occlusion detection (i.e., determination result) as illustrated in figures 5A, 5B, 9 and as described in paragraphs 0152, 0153; “At block 908, in response to occlusion events, position of the image capture device 120 is further adjusted to avoid or mitigate the effect of occlusion events.” Paragraph 0153)
Claim 7:
The cited prior art describes the imaging environment adjustment device according to claim 1 further comprising an image acquisition unit that acquires an image in the imaging environment adjusted by the imaging environment adjustment unit. (Habibi: “The vision tracking system 100 comprises an image capture device 120 (also referred to herein as a camera).” Paragraph 0056)
Claim 8:
The cited prior art describes a computer readable storage medium storing an instruction that causes a computer to perform: (Habibi: “This disclosure generally relates to machine vision, and more particularly, to visual tracking systems using image capture devices.” Paragraph 0003; “In a further embodiment, the vision tracking system advantageously addresses the problems of occlusion and/or focus by controlling the position and/or orientation of one or more cameras independently of the robotic device.” Paragraph 0031; see the processor 302 and memory 304 as illustrated in figure 3)
detecting at least one type of trigger, (Habibi: “The process starts at block 902, which corresponds to either of the ending blocks of FIG. 7 (block 716) or FIG. 8 (block 816). Accordingly, the robot controller 116 has received the processor signal 118 from transducer 114 based upon the emulated output signal 110 communicated from the vision tracking system 100 (FIG. 1), or the robot controller 116 has received an emulated processor signal 202 directly communicated from the vision tracking system 100 (FIG. 2).” Paragraph 0152)
acquiring a determination result as to whether or not to adjust an imaging environment into a control state indicated by at least one environment setting item defining the imaging environment in a machining machine; and (Habibi: see the occlusion detection as described in paragraphs 0099, 0100, 0103, 0104; “Upon detection of the occlusion (determination of an occlusion in the occlusion region 502), the vision tracking system 100 adjusts movement of the image capture device 120 to eliminate or minimize the occlusion. For example, in response to the vision tracking system 100 detecting an occlusion event, the image capture device 120 may be moved backward, stopped or decelerated to avoid or mitigate the effect of the occlusion.” Paragraph 0099; “Detection of occlusion events are determined upon analysis of captured image data.” Paragraph 0103)
when the at least one type of trigger is detected and the acquired determination result indicates to adjust the imaging environment, adjusting the imaging environment into the control state indicated by the at least one environment setting item. (Habibi: see the movement of the vision tracking system 100 including the image capture device 120 based on the signal from the vision tracking system 100 (i.e., trigger) and the occlusion detection (i.e., determination result) as illustrated in figures 1, 5A, 5B, 9 and as described in paragraphs 0152, 0153; “At block 908, in response to occlusion events, position of the image capture device 120 is further adjusted to avoid or mitigate the effect of occlusion events.” Paragraph 0153)
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. Patent Application Publication No. 2021/0255599 describes a machine tool operation monitoring system.
U.S. Patent Application Publication No. 2016/0085232 describes a numerical control device that calculates an approach path.
U.S. Patent No. 11,636,382 describes a robotic self-programming visual inspection system.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER E EVERETT whose telephone number is (571)272-2851. The examiner can normally be reached Monday-Friday 8:00 am to 5:00 pm (Pacific).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Fennema, can be reached at 571-272-2748. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Christopher E. Everett/Primary Examiner, Art Unit 2117