DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “first imaging device to capture”, “second imaging device to capture” and “processing device to operate” in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim 7 recites a limitation of the form “at least one of A and B” in line 3. In accordance with the decision of the U.S. Court of Appeals for the Federal Circuit in SuperGuide Corp. v. DirecTV Enterprises, Inc., such limitations are conjunctive in nature and are construed as “at least one of A and at least one of B”. Therefore, the claim is addressed herein as requiring each of A and B rather than the alternative of A or B.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-9 and 11-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
This analysis is based on the 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence (2024 AI SME Update) published on July 17, 2024 (89 FR 58128).
With regard to claims 1-9 and 11-13:
Step 1:
Claims 1-9 and 11-13 are directed to an apparatus, a method, or a non-transitory computer-executable medium, which fall under the statutory categories of machine, process, and article of manufacture, respectively. Therefore, Step 1 is met.
Step 2A, Prong 1:
Claims 1 and 13 recite “circuitry” and a “processor” to “determine whether the subject is a battery-containing product”. This limitation, excluding the circuitry/processor, falls within the mental process grouping of abstract ideas because it covers a concept performed in the human mind, including observation, evaluation, judgment, and opinion. Under its broadest reasonable interpretation when read in light of the specification, the determining step encompasses a mental process practically performed in the human mind. See MPEP 2106.04(a)(2), subsection III.
Dependent claims 2 and 5-8 add limitations that may be practically performed in the human mind using observation, evaluation, judgment, and opinion. For example, in determining target candidates, a person is able to observe whether the captured images exhibit visual characteristics indicative of a battery-containing product and to indicate the result of that mental determination.
Dependent claims 3, 4 and 9 further clarify the configuration of the imaging devices without adding any limitations on the determination functions of claim 1. Therefore, they do not resolve the issue of the claims being abstract.
Dependent claim 11 adds a robot that operates based on the determination result without adding any limitations on the determination functions of claim 1. Therefore, this does not resolve the issue of the claims being abstract.
Step 2A, Prong 2:
The limitations of claims 1-9 and 11-13 are recited as being performed by “circuitry” and a “processor”. The processor/circuitry is recited at a high level of generality and is used to perform an abstract idea, as discussed above in Step 2A, Prong One, such that it amounts to no more than mere instructions to apply the exception using a generic computer. See MPEP 2106.05(f), which provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception.
In evaluation of whether the invention integrates into a practical application, it should be clear that the claimed invention improves the functioning of a computer or improves another technology or technical field. To evaluate an improvement to a computer or technical field, the specification must set forth an improvement in technology and the claim itself must reflect the disclosed improvement. See MPEP 2106.04(d)(1) and 2106.05(a).
According to the specification, the ability to efficiently and accurately identify a battery-containing product is achieved by using projection mapping. This element is not explicitly recited in any claim except claim 10, which is why claim 10 is not rejected under 35 U.S.C. 101.
Step 2B:
In claims 1, 12 and 13, the limitations of capturing/acquiring the surface image and internal image amount to merely receiving data. In claim 1, the limitation of operating based on the determination result amounts to merely outputting data. These limitations are considered to be insignificant extra-solution activity. Accordingly, these limitations are further evaluated to determine whether the extra-solution activity is well-understood, routine, and conventional in the field. See MPEP 2106.05(g). Receiving and presenting data is well-understood, routine, and conventional in the field, and therefore these limitations do not add an inventive concept to the claims.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-3, 5, 6, 12 and 13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Geurts (WO2021/209614).
Regarding claim 1, Geurts discloses an object processing apparatus, comprising:
a first imaging device to capture a surface of a subject to obtain a surface image (“Secondly, additionally or alternatively, a 3D laser triangulation unit can be utilized to measure the shape of the object at high resolution (e.g. sub-mm accuracy). This allows for additional information to complement the one gathered from DE-XRT, such as 3D shape and volume. Thirdly, additionally or alternatively, a RGB detector may be used, which allows to differentiate the components in the material stream regarding color and shape.” at page 32, line 9);
a second imaging device to capture an internal object in the subject to obtain an internal image (“Data acquisition can be performed in different ways. The sensory system may include various sensors. In an example, data with respect to the material properties of the particles in the material stream (e.g. waste stream) is gathered by means of a multi-sensor characterization device. Firstly, dual-energy X-ray transmission (DE-XRT) may allow to see “through” the material and to determine certain material properties such as average atomic number and density. The advantage is that one can inspect the complete volume and not only the surface of the component (e.g. waste material is often dirty and surface properties are therefore not necessarily representative for the bulk of the material)” at page 32, line 1);
circuitry configured to determine whether the subject is a battery-containing product that has a built-in battery (“Optionally, the material stream is selected from a group consisting of solid waste, produced products, agricultural products, or batteries” at page 13, line 19) based on the surface image and the internal image to generate a determination result (“In some examples, the above mentioned sensors are used together” at page 32, line 15; the data from all the sensors are used to determine the material stream composition); and
a processing device to operate based on the determination result (“For instance, the system can be configured to perform waste characterization, wherein the system allows for efficient further training of the employed machine learning model. Additionally, in some examples, the system may also be configured to perform sorting of materials based on the waste characterization” at page 24, line 21).
Regarding claim 2, Geurts discloses an apparatus wherein the circuitry is further configured to determine whether the subject is the battery-containing product based on information obtained by integrating a feature of the surface image and a feature of the internal image (“predicting one or more labels and associated label probabilities for each of the unknown components 3i in the material stream 3 by means of a machine learning model which is configured to receive as input the imaging of the material stream 3 and/or one or more features of the unknown components extracted from the imaging of the material stream 3” at page 20, line 6; “Optionally, image processing can be used for segmenting the images into individual components. From these segmented images, various features describing the object's shape may be computed. Examples are the area, eccentricity and perimeter of a component. In some examples, this can be done for all images obtained from all sensors. Various neural network models and/or neural network architectures can be used. A neural network has the ability to process, e.g. classify, sensor data and/or pre-processed data, cf. determined features characteristics of the segmented objects” at page 32, line 16).
Regarding claim 3, Geurts discloses an apparatus wherein the circuitry is further configured to control the first imaging device and the second imaging device to cause a position where the subject is imaged in the surface image and a position where the subject is imaged in the internal image to match each other (“Optionally, data from different subsystems of the sensory system is aligned prior to determining characterizing features for each of the one or more segmented objects” at page 13, line 1; “In this way, the sensory unit 5 provides a plurality of images which can be aligned and/or fused, for instance by a computer unit 13. Aligning and/or fusing of the imaging data obtained from different camera's/detectors can enable a better determination of the features/characteristics of the segmented objects” at page 28, line 22).
Regarding claim 5, Geurts discloses an apparatus wherein
the circuitry is further configured to:
determine whether the subject is a first target candidate based on the surface image to generate a first determination result (“The one or more materials are segmented and the individual segmented objects 3i are analyzed for determining relevant features/characteristics thereof” at page 28, line 26; the image data from the first sensor type is analyzed by the machine learning system);
determine whether the internal object is a second target candidate based on the internal image to generate a second determination result (accordingly, the data from the second type of sensor is also analyzed by the machine learning system); and
in a case that the first determination result indicates that the subject is not the first target candidate or in a case that the second determination result indicates that the internal object is not the second target candidate, determine that the subject is not the battery-containing product (“Fig. 5 shows distributions of features for different component classes. The components 3i in the material stream 3 can be sorted into different classes: for example paper, wood, glass, stones, ferrous metals (ferro) and non-ferrous metals (non-ferro). Exemplary classes are provided in fig. 5. The machine learning model can be a classification model that is configured to learn to differentiate between these different classes” at page 25, line 22; therefore, if either or both sensor types indicate the object is not of a particular class, e.g. battery, the object is identified as such).
Regarding claim 6, Geurts discloses an apparatus wherein
the circuitry is further configured to:
determine whether the subject is a first target candidate based on the surface image to generate a first determination result (“The one or more materials are segmented and the individual segmented objects 3i are analyzed for determining relevant features/characteristics thereof” at page 28, line 26; the image data from the first sensor type is analyzed by the machine learning system);
determine whether the internal object is a second target candidate based on the internal image to generate a second determination result (accordingly, the data from the second type of sensor is also analyzed by the machine learning system); and
in a case that the first determination result indicates that the subject is the first target candidate or in a case that the second determination result indicates that the internal object is the second target candidate, determine that the subject is the battery-containing product (“Fig. 5 shows distributions of features for different component classes. The components 3i in the material stream 3 can be sorted into different classes: for example paper, wood, glass, stones, ferrous metals (ferro) and non-ferrous metals (non-ferro). Exemplary classes are provided in fig. 5. The machine learning model can be a classification model that is configured to learn to differentiate between these different classes” at page 25, line 22; therefore, if either or both sensor types indicate the object is of a particular class, e.g. battery, the object is identified as such).
Regarding claim 12, Geurts discloses an object determination method, comprising:
acquiring a surface image in which a surface of a subject appears (“Secondly, additionally or alternatively, a 3D laser triangulation unit can be utilized to measure the shape of the object at high resolution (e.g. sub-mm accuracy). This allows for additional information to complement the one gathered from DE-XRT, such as 3D shape and volume. Thirdly, additionally or alternatively, a RGB detector may be used, which allows to differentiate the components in the material stream regarding color and shape.” at page 32, line 9);
acquiring an internal image in which an internal object in the subject appears (“Data acquisition can be performed in different ways. The sensory system may include various sensors. In an example, data with respect to the material properties of the particles in the material stream (e.g. waste stream) is gathered by means of a multi-sensor characterization device. Firstly, dual-energy X-ray transmission (DE-XRT) may allow to see “through” the material and to determine certain material properties such as average atomic number and density. The advantage is that one can inspect the complete volume and not only the surface of the component (e.g. waste material is often dirty and surface properties are therefore not necessarily representative for the bulk of the material)” at page 32, line 1); and
determining whether the subject is a battery-containing product that has a built-in battery based on the surface image and the internal image (“Optionally, the material stream is selected from a group consisting of solid waste, produced products, agricultural products, or batteries” at page 13, line 19; “In some examples, the above mentioned sensors are used together” at page 32, line 15; the data from all the sensors are used to determine the material stream composition).
Regarding claim 13, Geurts discloses a non-transitory computer-executable medium storing a plurality of instructions which, when executed by a processor, causes the processor to perform a method (“processor, a computer readable storage medium, a sensory system, and a separator unit, wherein the computer readable storage medium has instructions stored which, when executed by the processor, result in the processor performing operations” at page 15, line 13) comprising:
acquiring a surface image in which a surface of a subject appears (“Secondly, additionally or alternatively, a 3D laser triangulation unit can be utilized to measure the shape of the object at high resolution (e.g. sub-mm accuracy). This allows for additional information to complement the one gathered from DE-XRT, such as 3D shape and volume. Thirdly, additionally or alternatively, a RGB detector may be used, which allows to differentiate the components in the material stream regarding color and shape.” at page 32, line 9);
acquiring an internal image in which an internal object in the subject appears (“Data acquisition can be performed in different ways. The sensory system may include various sensors. In an example, data with respect to the material properties of the particles in the material stream (e.g. waste stream) is gathered by means of a multi-sensor characterization device. Firstly, dual-energy X-ray transmission (DE-XRT) may allow to see “through” the material and to determine certain material properties such as average atomic number and density. The advantage is that one can inspect the complete volume and not only the surface of the component (e.g. waste material is often dirty and surface properties are therefore not necessarily representative for the bulk of the material)” at page 32, line 1); and
determining whether the subject is a battery-containing product that has a built-in battery based on the surface image and the internal image (“Optionally, the material stream is selected from a group consisting of solid waste, produced products, agricultural products, or batteries” at page 13, line 19; “In some examples, the above mentioned sensors are used together” at page 32, line 15; the data from all the sensors are used to determine the material stream composition).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 8 is rejected under 35 U.S.C. 103 as being unpatentable over Geurts.
Geurts discloses an apparatus wherein the circuitry is further configured to:
determine whether the subject is a second target candidate based on the internal image to generate a second determination result (“The one or more materials are segmented and the individual segmented objects 3i are analyzed for determining relevant features/characteristics thereof” at page 28, line 26; the image data from the second sensor type is analyzed by the machine learning system); and
in a case that the second determination result indicates that the subject is the second target candidate, determine that the subject is the battery-containing product (“Fig. 5 shows distributions of features for different component classes. The components 3i in the material stream 3 can be sorted into different classes: for example paper, wood, glass, stones, ferrous metals (ferro) and non-ferrous metals (non-ferro). Exemplary classes are provided in fig. 5. The machine learning model can be a classification model that is configured to learn to differentiate between these different classes” at page 25, line 22).
Geurts does not explicitly disclose first determining whether the subject is a first target candidate based on the surface image to generate a first determination result and, only in a case that the first determination result indicates that the subject is the first target candidate, determining whether the subject is a second target candidate based on the internal image to generate a second determination result.
However, conditional serial processing is well known in the art and can be implemented by cascaded machine learning processes. As such, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to employ such conditional processing in the system of Geurts to eliminate the need to process the object with both image data types when the candidate can likely be eliminated from contention with a single image type analysis.
Claim(s) 4 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Geurts (US 2023/0169751) and Bourely et al. (US 2013/0141115).
Geurts discloses an apparatus further comprising a conveyor to convey the subject along a conveyance path (“Optionally, the material stream is moved on a conveyor, wherein the material stream is scanned by means of the sensory system for characterization of objects in the material stream” at page 13, line 7), wherein the first imaging device and the second imaging device are disposed at different positions (it is implied that the use of multiple sensor types requires placement at different positions).
Geurts does not explicitly disclose that the circuitry is further configured to control the first imaging device and the second imaging device to cause the second imaging device to image the internal image at a time different from a time when the first imaging device images the surface image.
Bourely et al. teaches an apparatus in the same field of endeavor of waste sorting, wherein the circuitry is further configured to control the first imaging device (“The surface analysis system 4 can use, depending on the targeted application, a UV/visible, infrared spectroscopy optical analysis and/or a thermographic analysis” at paragraph 0050, line 1) and the second imaging device (“The volume analysis can be based on the use of a hyperfrequency system 5 (FIG. 2) that enables to analyze the object 2 in its entire thickness. The object 2 is illuminated by a beam of hyperfrequency waves emitted by antennas 8, preferably of the cone type, held up by a support 8'. The wave is propagated then from the emitting antenna array 8 to the receiving antenna array 9. When the object 2 passes into the zone 18, it changes the amplitude and the phase of the hyperfrequency waves picked up by the antenna array 9” at paragraph 0053, line 1) to cause the second imaging device to image the internal image at a time different from a time when the first imaging device images the surface image (looking at Figure 1, the object is moved via conveyor through the first sensor area at numeral 4 and then through the second sensor area at numeral 5, thereby indicating different imaging timings).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize a serial imaging configuration as taught by Bourely et al. in the system of Geurts to ensure non-interference between imaging types.
Claim(s) 9 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Geurts and Saeedkia (US 2014/0367316).
Geurts discloses an apparatus as described in claim 1 as described above.
Geurts does not explicitly disclose that the second imaging device includes a terahertz sensor to generate terahertz waves and to generate the internal image based on a reflected wave reflected by the internal object among the terahertz waves.
Saeedkia teaches an apparatus in the same field of endeavor of material determination and sorting wherein the second imaging device includes a terahertz sensor to generate terahertz waves and to generate the internal image based on a reflected wave reflected by the internal object among the terahertz waves (“According to some embodiments, there is a method of identifying materials. The method comprises transmitting a terahertz wave for interaction with an object. The interaction results in a resulting terahertz wave that is influenced by the object. The method also comprises receiving the resulting terahertz wave, generating measurement data based on the resulting terahertz wave received, calculating an object response signature based on the measurement data, and comparing the object response signature to a set of known response signatures so as to identify the object” at paragraph 0021).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize a terahertz sensor as taught by Saeedkia in the second sensor system of Geurts as an additional way of characterizing the object material for further discrimination (see Saeedkia at paragraph 0008).
Claim(s) 10 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Geurts and Hotte et al. (US 2016/0078678).
Geurts discloses an apparatus as described in claim 1 as described above.
Geurts does not explicitly disclose a projector, and the circuitry is further configured to: calculate a position where the subject is present based on the surface image or the internal image; and in a case that the determination result indicates that the subject is the battery-containing product, control the projector to project video indicating an area where the subject is present onto the subject based on the calculated position.
Hotte et al. teaches an apparatus in the same field of endeavor of material determination and sorting, wherein
the processing device includes a projector (“Turning back to FIG. 1 in view of FIG. 3, the system 10 further includes a projector 70” at paragraph 0034, line 1), and
the circuitry is further configured to:
calculate a position where the subject is present based on the surface image or the internal image (“In the example shown in FIG. 2, the object position tracking data correspond to position coordinates X.sub.T, Y.sub.L for the article 12′ as located within the working area 20 at a specific time” at paragraph 0033, second to last sentence); and
in a case that the determination result indicates that the subject is the object of interest, control the projector to project video indicating an area where the subject is present onto the subject based on the calculated position (“As shown in FIGS. 1 and 3, the projector 74 is linked to the computer 40 through data line 76 to receive the object tracking task instruction data. Corresponding to the final method step 55 shown in the flowchart of FIG. 5, the projector 70 is operated for directing light according to the object tracking task instruction data onto the object as it moves through the working zone 20 to provide a visual instruction for the operator about the task to perform on the object” at paragraph 0034, line 9).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize the projector system as taught by Hotte et al. in the system of Geurts to assist the operator in identifying and isolating the object of interest.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Geurts and Doublet et al. (US 2016/0228920).
Geurts discloses the apparatus of claim 1, as described above.
Geurts does not explicitly disclose a robot, and the circuitry is further configured to: calculate a position where the subject is present based on the surface image or the internal image; and in a case that the determination result indicates that the subject is the battery-containing product, control the robot to move the subject based on the calculated position.
Doublet et al. teaches an apparatus in the same field of endeavor of material determination and sorting, wherein
the processing device includes a robot (“Advantageously, the means for removing the unitary object from the receiving zone are chosen from among a belt conveyor, a mechanical robot, a vibrating table, or the same mechanical robot used for displacing a unitary object from the zone of vision towards the receiving zone” at paragraph 0094), and
the circuitry is further configured to:
calculate a position where the subject is present based on the surface image or the internal image (“The device according to the invention further comprises a mechanical robot provided with at least one gripping member that makes it possible, in a first step, to grip an object contained in the pile present beforehand in the zone of vision, with each object of the pile being defined by one or several gripping zones, and in a second step to displace the gripped object from the zone of vision to another zone, called a receiving zone” at paragraph 0076); and
in a case that the determination result indicates that the subject is the battery-containing product (“The objects of the pile that can be sorted by the method according to the invention are for example household waste whether or not organic, electronic waste” at paragraph 0033, line 1; electronic waste notably includes objects containing batteries), control the robot to move the subject based on the calculated position (the determined object type dictates which pile the object is to be sorted to by means of the robot).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the robot sorting as taught by Doublet et al. in the system of Geurts to allow the object to be expeditiously sorted to its respective material section.
Allowable Subject Matter
Claim 7 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: the prior art does not utilize both the surface and internal images to determine a first classification and, only if that first classification is positive, utilize the surface and internal images again to determine a second classification establishing that the object contains a battery, as required by claim 7.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATRINA R FUJITA whose telephone number is (571) 270-1574. The examiner can normally be reached Monday - Friday, 9:30 am - 5:30 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at 571-272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KATRINA R FUJITA/Primary Examiner, Art Unit 2672