DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Comments
The Preliminary Amendments filed on March 13, 2024, and on May 6, 2024, have been entered and made of record.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “recognition section” and “imaging control section” in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 4, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Iwamoto et al. (U.S. Pub. No. 2022/0066468) in view of Fujinoi (U.S. Pub. No. 2020/0042791) and Kimura (U.S. Pub. No. 2014/0293064).
As to claim 1, Iwamoto et al. teaches a moving body (i.e., “autonomous mobile robot 10”, Paragraph [0060]) comprising:
an imaging device configured to image a recognition target (i.e., “sensor 140 is a sensor that is provided in a desired position of the autonomous mobile robot 10 and detects a marker 91 attached to the target object 90 (see FIG. 5). The sensor 140 may be, for example, a camera”, Paragraph [0065]); and
a recognition section configured to (i.e., “control apparatus 100”, Paragraph [0061]) recognize the recognition target based on imaging data of the imaging device (i.e., “the marker position specifying unit 160 performs image recognition processing on image data output from the sensor 140, thereby detecting the marker 91 and specifying its position”, Paragraph [0076]), the moving body being configured to perform a predetermined operation based on the recognition target recognized by the recognition section (See for example, “Upon determining the support position, the operation controller 161 performs control so as to support the target object 90 by the supporting part 130 … the operation controller 161 moves the autonomous mobile robot 10 in such a way that, for example, the center part of the supporting part 130 is positioned just below the marker 91. The operation controller 161 then raises the supporting part 130 … After that, the operation controller 161 moves the autonomous mobile robot 10 to a transportation destination specified in advance along with the target object 90. Lastly, the operation controller 161 lowers the supporting part 130 and puts the target object 90 on the floor surface at the specified transportation destination (transportation destination point). The transportation is thus completed”, Paragraph [0080]).
However, Iwamoto et al. does not explicitly disclose the moving body further comprising: an imaging control section configured to, when the recognition target is not recognized by the recognition section, change an imaging condition of the imaging device stepwise so that the recognition target is recognized by the recognition section, store the imaging condition when the recognition target is recognized by the recognition section in a storage device, and, when the recognition target or another recognition target is imaged by the imaging device under the same or similar situation, apply the imaging condition stored in the storage device.
Fujinoi teaches an imaging control section configured to (i.e., “CPU 111”, Paragraph [0022]) store the imaging condition when the recognition target is recognized by the recognition section (i.e., “default ISO speed is a predetermined ISO speed at which … the photographed image 151 with which the CPU 111 can recognize the markers 210 can be photographed”, Paragraph [0049]) in a storage device (i.e., “The default ISO speed may be stored in the storage device 115”, Paragraph [0051]), and, when the recognition target or another recognition target is imaged by the imaging device under the same or similar situation, apply the imaging condition stored in the storage device (See for example, steps S108 – S110, Paragraphs [0104]-[0108]; as long as the amount of light in the warehouse does not change, the same photographing condition is used.).
The combination of Iwamoto et al. and Fujinoi does not explicitly disclose the imaging control section configured to, when the recognition target is not recognized by the recognition section, change an imaging condition of the imaging device stepwise so that the recognition target is recognized by the recognition section.
Kimura teaches an imaging control section configured to (i.e., “CPU 101”, Paragraph [0029]), when the recognition target is not recognized by the recognition section (i.e., “CPU 101 determines whether object recognition can be achieved”, Paragraph [0102]; and “if determined that object recognition cannot be achieved”, Paragraph [0110]), change an imaging condition of the imaging device stepwise so that the recognition target is recognized by the recognition section (i.e., “the variable aperture 31 is stopped down stage by stage to deepen the depth of field…”, Paragraph [0110]; and Paragraph [0111]).
Iwamoto et al., Fujinoi and Kimura are analogous art because they are from the field of digital image processing for object detection.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Iwamoto et al. by incorporating an imaging control section configured to, when the recognition target is not recognized by the recognition section, change an imaging condition of the imaging device stepwise so that the recognition target is recognized by the recognition section, as taught by Kimura, and by configuring the imaging control section to store the imaging condition when the recognition target is recognized by the recognition section in a storage device, and, when the recognition target or another recognition target is imaged by the imaging device under the same or similar situation, apply the imaging condition stored in the storage device, as taught by Fujinoi.
The suggestion/motivation for doing so would have been to avoid processing delays when recognizing targets, and to perform object recognition with high accuracy even when an object image formed becomes out of focus.
Therefore, it would have been obvious to combine Fujinoi and Kimura with Iwamoto et al. to obtain the invention as specified in claim 1.
As to claim 2, Iwamoto et al. does not explicitly disclose wherein the imaging control section is further configured to, in a case where the recognition target is not recognized by the recognition section even when the imaging condition stored in the storage device is applied, change the imaging condition of the imaging device so that the recognition target is recognized by the recognition section.
Fujinoi teaches the imaging control section is further configured to, in a case where the recognition target is not recognized by the recognition section even when the imaging condition stored in the storage device is applied, change the imaging condition of the imaging device so that the recognition target is recognized by the recognition section (See for example, “When the CPU 111 fails to recognize the markers 210 from the photographed image 151 in the first mode, the CPU 111 may change the mode to the second mode. That is, the CPU 111 may change the mode to the second mode in which the shutter speed is set appropriate for reading the color codes based on an error code and the like, which is output when the CPU 111 attempting to execute the reading process of the color codes with an application program fails to recognize the color codes”, Paragraph [0130]), and store the imaging condition when the recognition target is recognized by the recognition section in the storage device (i.e., “the storage device 115 stores the default shutter speed, the default F-number, and the default ISO speed in the second mode”, Paragraph [0114]).
Therefore, in view of Fujinoi and Kimura, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iwamoto et al. by incorporating an imaging control section further configured to, in a case where the recognition target is not recognized by the recognition section even when the imaging condition stored in the storage device is applied, change the imaging condition of the imaging device so that the recognition target is recognized by the recognition section, as taught by Fujinoi, in order to avoid processing delays when recognizing targets.
As to claim 4, Iwamoto et al. does not explicitly disclose wherein the imaging condition includes at least sensitivity and an exposure time of the imaging device.
Fujinoi teaches the imaging condition includes at least sensitivity and an exposure time of the imaging device (See for example, “relationship between the ISO speed and the shutter speed (exposure period)”, Paragraph [0050]).
Therefore, in view of Fujinoi and Kimura, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iwamoto et al. by incorporating an imaging condition that includes at least sensitivity and an exposure time of the imaging device, as taught by Fujinoi, in order to obtain a sufficient amount of exposure and easily recognize targets.
As to claim 10, Iwamoto et al. teaches a control method (i.e., “transport method”, Abstract) for a moving body (i.e., “autonomous mobile robot 10”, Paragraph [0060]) including an imaging device (i.e., “The sensor 140 may be, for example, a camera”, Paragraph [0065]) and a recognition section (i.e., “control apparatus 100”, Paragraph [0061]), the imaging device being configured to image a recognition target (i.e., “sensor 140 is a sensor that is provided in a desired position of the autonomous mobile robot 10 and detects a marker 91 attached to the target object 90 (see FIG. 5)”, Paragraph [0065]), and the recognition section being configured to recognize the recognition target based on imaging data of the imaging device (i.e., “the marker position specifying unit 160 performs image recognition processing on image data output from the sensor 140, thereby detecting the marker 91 and specifying its position”, Paragraph [0076]).
However, Iwamoto et al. does not explicitly disclose the control method comprising: changing, when the recognition target is not recognized by the recognition section, an imaging condition of the imaging device stepwise so that the recognition target is recognized by the recognition section, and storing the imaging condition when the recognition target is recognized by the recognition section in a storage device; and applying, when the recognition target or another recognition target is imaged by the imaging device under the same or similar situation, the imaging condition stored in the storage device.
Fujinoi teaches a control method comprising: storing the imaging condition when the recognition target is recognized by the recognition section (i.e., “default ISO speed is a predetermined ISO speed at which … the photographed image 151 with which the CPU 111 can recognize the markers 210 can be photographed”, Paragraph [0049]) in a storage device (i.e., “The default ISO speed may be stored in the storage device 115”, Paragraph [0051]); and applying, when the recognition target or another recognition target is imaged by the imaging device under the same or similar situation, the imaging condition stored in the storage device (See for example, steps S108 – S110, Paragraphs [0104]-[0108]; as long as the amount of light in the warehouse does not change, the same photographing condition is used.).
The combination of Iwamoto et al. and Fujinoi does not explicitly disclose the changing, when the recognition target is not recognized by the recognition section, of an imaging condition of the imaging device stepwise so that the recognition target is recognized by the recognition section.
Kimura teaches changing, when the recognition target is not recognized by the recognition section (i.e., “CPU 101 determines whether object recognition can be achieved”, Paragraph [0102]; and “if determined that object recognition cannot be achieved”, Paragraph [0110]), an imaging condition of the imaging device stepwise so that the recognition target is recognized by the recognition section (i.e., “the variable aperture 31 is stopped down stage by stage to deepen the depth of field…”, Paragraph [0110]; and Paragraph [0111]).
Therefore, in view of Fujinoi and Kimura, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iwamoto et al. by incorporating the changing of an imaging condition of the imaging device stepwise, when the recognition target is not recognized by the recognition section, so that the recognition target is recognized by the recognition section, as taught by Kimura, and the storing of the imaging condition in a storage device when the recognition target is recognized by the recognition section, and the applying of the stored imaging condition when the recognition target or another recognition target is imaged by the imaging device under the same or similar situation, as taught by Fujinoi, in order to avoid processing delays when recognizing targets, and to perform object recognition with high accuracy even when an object image formed becomes out of focus.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Iwamoto et al. in view of Fujinoi and Kimura as applied to claim 1 above, and further in view of Elazary et al. (U.S. Pub. No. 2017/0225891). The teachings of Iwamoto et al., Fujinoi and Kimura have been discussed above.
As to claim 3, Iwamoto et al., Fujinoi and Kimura do not explicitly disclose wherein the moving body is configured to estimate a self-location based on the recognition target recognized by the recognition section.
Elazary et al. teaches a moving body (i.e., “robot 41”, Paragraph [0038]) that is configured to estimate a self-location based on the recognition target (i.e., “marker”, Paragraph [0052]) recognized by the recognition section (i.e., “the position of the robot within a three-dimensional space can be determined directly from one or more markers before or adjacent to the robot”, Paragraph [0052]; and Paragraph [0053]).
Iwamoto et al., Fujinoi, Kimura and Elazary et al. are analogous art because they are from the field of digital image processing for object detection.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to further modify Iwamoto et al., Fujinoi and Kimura by incorporating a moving body configured to estimate a self-location based on the recognition target recognized by the recognition section, as taught by Elazary et al.
The suggestion/motivation for doing so would have been to efficiently navigate to the location of an item anywhere within a distribution site, precisely, and in a continuous and uninterrupted manner.
Therefore, it would have been obvious to combine Elazary et al. with Iwamoto et al., Fujinoi and Kimura to obtain the invention as specified in claim 3.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Iwamoto et al. in view of Fujinoi and Kimura as applied to claim 1 above, and further in view of Meier et al. (U.S. Pub. No. 2015/0306763). The teachings of Iwamoto et al., Fujinoi and Kimura have been discussed above.
As to claim 5, Iwamoto et al., Fujinoi and Kimura do not explicitly disclose wherein the imaging condition when the recognition target is recognized by the recognition section is shared by another moving body.
Meier et al. teaches an imaging condition when the recognition target is recognized by the recognition section that is shared by another moving body (i.e., “learning store may provide a format converter between robotic contexts of the same channel that differ by resolution of learned features and image features … the converter may use interpolation, up-sampling and/or down-sampling, super resolution, density estimation techniques, and/or other operations to approximate the experience that another robot would have had, had it been in the same context of the robot that actually recorded the context action sample”, Paragraph [0106]; Paragraph [0112]; “When the source comprises another robotic device(s), a connection to the device (via, e.g., USB, or a wireless link) may be established”, Paragraph [0166]; “may desire to add new vision processing functionality”, Paragraph [0168]; and “In some implementations, the image may correspond to the robotic brain image from another robotic device that has been previously trained”, Paragraph [0169]).
Iwamoto et al., Fujinoi, Kimura and Meier et al. are analogous art because they are from the field of digital image processing for object detection.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to further modify Iwamoto et al., Fujinoi and Kimura by incorporating the sharing, with another moving body, of the imaging condition when the recognition target is recognized by the recognition section, as taught by Meier et al.
The suggestion/motivation for doing so would have been to allow other devices to perform tasks requiring vision processing.
Therefore, it would have been obvious to combine Meier et al. with Iwamoto et al., Fujinoi and Kimura to obtain the invention as specified in claim 5.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Iwamoto et al. in view of Fujinoi and Kimura as applied to claim 1 above, and further in view of Sonoura et al. (U.S. Pub. No. 2020/0150664). The teachings of Iwamoto et al., Fujinoi and Kimura have been discussed above.
As to claim 6, Iwamoto et al., Fujinoi and Kimura do not explicitly disclose wherein the moving body is configured to recognize the recognition target attached to a wheeled platform, and the moving body is configured to support and convey the wheeled platform.
Sonoura et al. teaches a moving body (i.e., “unmanned transport vehicle 1”, Paragraph [0026]) that is configured to recognize the recognition target attached to a wheeled platform (i.e., “a mark such as a cart number attached to each cart 90 may be acquired by performing image processing and the type of cart 90 may be identified by the mark such as the cart number”, Paragraph [0121]), and the moving body is configured to support and convey the wheeled platform (See for example, Paragraphs [0069]-[0070]; and “when the cart 90 is moved by the unmanned transport vehicle 1”, Paragraph [0082]).
Iwamoto et al., Fujinoi, Kimura and Sonoura et al. are analogous art because they are from the field of digital image processing for object detection.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to further modify Iwamoto et al., Fujinoi and Kimura by incorporating a moving body configured to recognize the recognition target attached to a wheeled platform and to support and convey the wheeled platform, as taught by Sonoura et al.
The suggestion/motivation for doing so would have been to improve the efficiency of a transport movement operation when a cage cart is moved.
Therefore, it would have been obvious to combine Sonoura et al. with Iwamoto et al., Fujinoi and Kimura to obtain the invention as specified in claim 6.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Iwamoto et al. in view of Fujinoi and Kimura as applied to claim 1 above, and further in view of Park (U.S. Pub. No. 2021/0405646). The teachings of Iwamoto et al., Fujinoi and Kimura have been discussed above.
As to claim 7, Iwamoto et al., Fujinoi and Kimura do not explicitly disclose wherein the moving body is configured to travel on a route to a target point while recognizing multiple recognition targets arranged inside and/or outside a building.
Park teaches a moving body (i.e., “cart-robot 100”, Paragraph [0039]) that is configured to travel on a route to a target point (i.e., “the cart-robot 100 travels along the marker when returning to the return place with a storage station after the charging of the cart-robot 100 is completed”, Paragraph [0140]) while recognizing multiple recognition targets arranged inside and/or outside a building (See for example, “the controller 250 analyzes the image photographed by the camera sensor 260 and calculates the moving direction or moving speed of the cart-robot that moves along the marker or a path where the plurality of markers are disposed and controls the mover 190 to move the cart-robot into a space indicated by a marker”, Paragraph [0059]).
Iwamoto et al., Fujinoi, Kimura and Park are analogous art because they are from the field of digital image processing for object detection.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to further modify Iwamoto et al., Fujinoi and Kimura by incorporating a moving body configured to travel on a route to a target point while recognizing multiple recognition targets arranged inside and/or outside a building, as taught by Park.
The suggestion/motivation for doing so would have been to improve user convenience, and increase efficiency by allowing autonomous travelling when charging the moving body.
Therefore, it would have been obvious to combine Park with Iwamoto et al., Fujinoi and Kimura to obtain the invention as specified in claim 7.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Iwamoto et al. in view of Fujinoi and Kimura as applied to claim 1 above, and further in view of Chen et al. (U.S. Pub. No. 2021/0072763). The teachings of Iwamoto et al., Fujinoi and Kimura have been discussed above.
As to claim 8, Iwamoto et al., Fujinoi and Kimura do not explicitly disclose wherein the moving body is configured to recognize the recognition target attached to an operator or another moving body and move following the operator or the other moving body.
Chen et al. teaches a moving body (i.e., “automated guided vehicle (AGV) 100”, Paragraph [0032]) that is configured to recognize the recognition target attached to an operator or another moving body (i.e., “optical marker 511, which may comprise a special coating or special color, is physically attached to the shirt of the person to the left in image 505. Similarly, optical markers 512 and 513 are physically attached to the back of the vest of the person to the right in image 505”, Paragraph [0048]) and move following the operator or the other moving body (i.e., “The user or operator of AGV100 is able to use the supervisor program 412 to force an association of targets tagged by markers 511-513 into selected categories, such as recognized human, human to be followed, obstacles, and the like”, Paragraph [0049]).
Iwamoto et al., Fujinoi, Kimura and Chen et al. are analogous art because they are from the field of digital image processing for object detection.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to further modify Iwamoto et al., Fujinoi and Kimura by incorporating a moving body configured to recognize the recognition target attached to an operator or another moving body and to move following the operator or the other moving body, as taught by Chen et al.
The suggestion/motivation for doing so would have been to provide an autonomous moving body that can be programmed to follow targets in an efficient manner.
Therefore, it would have been obvious to combine Chen et al. with Iwamoto et al., Fujinoi and Kimura to obtain the invention as specified in claim 8.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Iwamoto et al. in view of Fujinoi and Kimura as applied to claim 1 above, and further in view of Sadamoto et al. (U.S. Pub. No. 2022/0080966). The teachings of Iwamoto et al., Fujinoi and Kimura have been discussed above.
As to claim 9, Iwamoto et al., Fujinoi and Kimura do not explicitly disclose multiple mecanum wheels configured to be rotationally driven by respective corresponding electric motors.
Sadamoto et al. teaches multiple mecanum wheels (i.e., “the drive system includes four mecanum wheels 11a to 11d”, Paragraph [0034]) configured to be rotationally driven by respective corresponding electric motors (i.e., “four drive motors 12a to 12d”, Paragraph [0034]).
Iwamoto et al., Fujinoi, Kimura and Sadamoto et al. are analogous art because they are from the field of digital image processing for object detection.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to further modify Iwamoto et al., Fujinoi and Kimura by incorporating multiple mecanum wheels configured to be rotationally driven by respective corresponding electric motors, as taught by Sadamoto et al.
The suggestion/motivation for doing so would have been to instantly perform forward movement, change of direction, movement to a sideway, and turning on the spot without a preparation operation.
Therefore, it would have been obvious to combine Sadamoto et al. with Iwamoto et al., Fujinoi and Kimura to obtain the invention as specified in claim 9.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSE M TORRES whose telephone number is (571)270-1356. The examiner can normally be reached Monday through Friday, 10:00 AM to 6:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSE M TORRES/Examiner, Art Unit 2664 01/20/2026
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664