DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined
under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3, 5, 6, 8-14 and 16-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (abstract idea) without significantly more.
Under Step 1 of the 2019 Revised Patent Subject Matter Eligibility Guidance, the claims are directed to a process (claim 1, a method) or a machine (claim 16, a system), which are statutory categories.
However, evaluating claim 1, under Step 2A, Prong One, the claim is directed
to the judicial exception of an abstract idea under the mathematical relationship/mental process groupings. The limitations include:
verifying the assembly of the connector based on the at least one audio signal.
The claim is directed to the abstract idea of collecting and analyzing information and making a determination, namely capturing an audio signal and verifying an assembly based on the audio signal, which falls within the judicial exceptions of mental processes and methods of organizing human activity (i.e., observation and evaluation of whether an assembly is proper), a task that can be performed by a human listening for a sound indicative of a proper connection.
Next, Step 2A, Prong Two evaluates whether additional elements of the claim “integrate the abstract idea into a practical application” in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception. The claim does not recite additional elements that integrate the judicial exception into a practical application.
The additional element of “capturing at least one audio signal” is considered insignificant extra-solution activity of collecting data, which is not sufficient to integrate the claim into a practical application. Mere data gathering does not elevate the claim to a practical application.
The additional element of “from within a footprint of an assembly line during the assembly of the connector” merely applies the abstract idea in a particular technological environment and uses generic sensing to observe a physical process, which does not integrate the abstract idea into a practical application or improve the functioning of a computer, sensor, or other technology.
Therefore, the claims are directed to an abstract idea.
At Step 2B, consideration is given to whether any additional elements amount to significantly more than the abstract idea. Under Step 2B, there are no additional elements that make the claim significantly more than the abstract idea.
The claim does not include any non-conventional or non-generic elements, specific signal processing, or technical improvement beyond the abstract idea itself.
The limitations have been considered individually and as a whole and do not amount to significantly more than the abstract idea itself.
Accordingly, claim 1 does not amount to significantly more than the abstract idea and is therefore considered not eligible under 35 USC § 101.
Dependent claims 2-3 and 8-14 do not add anything that would render the claimed invention a patent-eligible application of the abstract idea. The claims merely extend (or narrow) the abstract idea, which does not amount to “significantly more” because they merely add details to the algorithm that forms the abstract idea, as discussed above.
Regarding claim 2, the claim merely limits the abstract idea to real-time verification during electric-vehicle battery assembly, which is a field-of-use and timing limitation. Such limitations do not integrate the abstract idea into a practical application or add an inventive concept. Accordingly, the claim does not amount to significantly more than the abstract idea and is therefore considered not eligible under 35 USC § 101.
Regarding claim 11, the additional element “training a learning model based on features extracted from a training dataset that includes signatures of properly mated connectors and signatures of background noise” is considered performing a mathematical calculation, which falls within the “mathematical concept” grouping of abstract ideas (see Example 47 in the 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence).
Regarding claims 12-14, the additional elements “a verification signal”, “providing feedback to an operator”, and “controlling an output device” to provide different indications based on the verification result merely present information to a user and constitute post-solution activity. The limitations do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea itself.
Claim 16 is rejected under 35 USC § 101 for the same rationale as claim 1.
Dependent claims 17-20 do not add anything that would render the claimed invention a patent-eligible application of the abstract idea. The claims merely extend (or narrow) the abstract idea, which does not amount to “significantly more” because they merely add details to the algorithm that forms the abstract idea, as discussed above.
Claims 4, 7 and 15 are considered eligible under 35 USC 101.
Regarding claim 4, the additional element “moving the audio sensing device
towards the position of the operator, in response to determining that the position of the operator is misaligned with the position of the audio sensing device” integrates the abstract idea into a practical application by actively controlling a physical sensor based on real-world conditions, thereby improving the operation of the verification system in an assembly environment.
Regarding claim 7, the additional element “the plurality audio sensing devices
being spaced apart from each other at different locations within the footprint of the assembly line; and fusing the plurality of audio signals to generate a final audio signal, wherein the verifying verifies the assembly of the connector based on the final audio signal” integrates the abstract idea into a practical application by improving the operation of the sensing system in an assembly environment.
Regarding claim 15, the additional elements “controlling a collaborative robot to
automatically perform the assembly of the connector based on data captured from a guidance sensor; and controlling the collaborative robot to reassemble the connector, in response to the verifying indicating that the assembly of the connector is an improper assembly” integrate the abstract idea into a practical application. The recited verification is integrated into a manufacturing process that physically manipulates components via robotic control. As such, the claim applies the abstract idea in a practical application that improves the operation of the verification system in an assembly environment.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that
form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 and 16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Walker et al. (Pub. No. US 2017/0092099) (hereinafter Walker).
As per claims 1 and 16, Walker teaches capturing at least one audio signal from within a footprint of an assembly line during the assembly of the connector (see abstract, ¶¶ [0014], [0022], [0024] and [0055], i.e., acoustic data resulting from the connection of the first connector and the second connector is received by a microphone during the manufacturing or assembly process, ¶¶ [0002]-[0003], [0028] and [0040]-[0041], i.e., the microphone operates at an assembly station of the assembly line, including line-side and pokayoke environments, such that the acoustic data is captured from within the assembly environment during connector assembly); and verifying the assembly of the connector based on the at least one audio signal (see ¶¶ [0004], [0014], [0026]-[0027] and [0056]-[0057], i.e., analyzing the captured acoustic data to determine whether a predetermined acoustic signature indicative of a proper connection is present, and generating a confirmation or rejection signal indicating whether the connector assembly is proper or not).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over
Walker in view of Matsuda (Pub. No. US 2015/0251716) and further in view of Donahue et al. (Patent No. US 6,523,417) (hereinafter Donahue).
As per claims 2 and 17, Walker teaches the system as stated above. However, Walker fails to explicitly teach verifying the assembly in real time while a battery for an electric vehicle that includes the connector is being assembled on the assembly line.
Matsuda, however, teaches an electric vehicle manufacturing environment in which a battery unit including connectors is assembled and incorporated into an electric vehicle as part of a production workflow (see ¶¶ [0033]-[0035], [0059], [0062] and [0065]).
Donahue teaches performing verification at an end-of-assembly-line station and further teaches real-time verification during assembly-line testing, including monitoring component operation and assembly conditions as the product moves through manufacturing (see abstract, i.e., “verify the internal testing of the seat adjustment mechanisms, both as to static adjusted positions and real time adjustment motion”, col. 1, lines 6-9, i.e., “equipment for end-of-assembly line testing…”, col. 5, lines 53-54, i.e., “capable of measuring or responding to changes in distance between them in real time”, col. 10, line 63 through col. 11, line 8, i.e., “the sensor array 100 monitors seat movement in real time…”). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the known real-time, assembly-line verification techniques of Donahue (i.e., using a sensor array at a test station to measure component position and motion in real time and using that sensed motion/position to verify adjustment/function during assembly-line testing) to the audio-based connection/assembly verification of Walker in the electric vehicle battery assembly environment of Matsuda, because Donahue teaches that monitoring/measuring a product’s physical motion/position at an end-of-line station in real time provides an objective verification of functional performance during the test cycle (beyond mere static checks) (see col. 3, lines 1-11 and col. 7, lines 42-45), thereby improving the reliability and completeness of in-line/end-of-line verification by confirming proper operation as the assembly is actuated/moved at the station (with predictable results).
Claims 3 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over
Walker in view of Yamana et al. (Pub. No. US 2023/0251668) (hereinafter Yamana).
As per claim 3, Walker teaches the system as stated above. However, Walker fails to teach tracking a relative position of an operator and the audio sensing device within a defined footprint.
Yamana, however, teaches tracking a position of a user within a bounded operational area associated with an autonomous vehicle, including determining a direction and location of a user based on sensor data (including microphone data) acquired within a defined measurement range or footprint of the vehicle (see ¶¶ [0003], [0006], [0109]-[0110] and [0138]-[0139]). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Walker to incorporate the operator-position tracking techniques of Yamana because tracking the relative position of the operator within a bounded footprint enables the audio sensing device to be spatially correlated with the operator during the assembly operation, improving capture of assembly-related audio signals in a noisy environment, thereby improving the reliability and accuracy of real-time assembly verification without altering the fundamental operation of the audio-based verification system taught by Walker.
As per claim 4, the combination of Walker and Yamana teaches the system as stated above. However, Walker fails to teach detecting a position of the operator within the footprint; determining whether the position of the operator is aligned with a position of the audio sensing device; and moving the audio sensing device towards the position of the operator, in response to determining that the position of the operator is misaligned with the position of the audio sensing device.
Yamana, however, further teaches detecting and tracking a position of a user within a bounded operational area using sensors (see ¶ [0010], i.e., “periodically calculate its own position and orientation in the predetermined space 100 based on at least one of a measurement result measured by the LIDAR device 212, a color image captured by the front RGB camera 221, or a range image captured by the ToF camera 222”, ¶ [0138], i.e., “the autonomous vehicle 120 can analyze the audio data detected by the microphones 301 to 304 to determine the direction in which the voice of the user 110 was emitted (the direction in which the user 110 is present)”), determining a relative positional relationship between the user and a sensing system (see ¶ [0153], i.e., “the autonomous vehicle 120 may estimate the position where the user 110 is highly likely to be present based on the information stored in the memory in step S1102. Further, the autonomous vehicle 120 may identify, on the environment map, the coordinates that indicate positions near the estimated position. The information stored in the memory in step S1102 may include the coordinates indicating the position of the autonomous vehicle 120 on the environment map, the information indicating the orientation of the autonomous vehicle 120, and the determination result regarding the direction in which the user 110 is present”), and controlling movement of a system based on the detected user position within that bounded area (see ¶¶ [0109]-[0112], e.g., “When the conveyance controller 833 is notified about the completion of the docking from the docking controller 823, the conveyance controller 833 may execute control to move the autonomous vehicle 120 based on the coordinates indicating the conveyance destination position notified from the conveyance destination position identification unit 832”).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the audio-based assembly verification system of Walker to incorporate the user-position detection and motion control techniques of Yamana because Yamana teaches detecting and tracking a user’s position within a predetermined, mapped operational space, determining relative positional relationships between the user and a system, and controlling system movement based on that detected user position; a person of ordinary skill in the art would have recognized that applying such user-relative positioning and movement control to reposition sensing components within a constrained footprint would predictably facilitate effective sensing and interaction during operation.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Walker in
view of Yamana and further in view of Salume et al. (Pub. No. US 2018/0332420) (hereinafter Salume).
As per claim 5, the combination of Walker and Yamana teaches the system as stated above, except that the detecting the position of the operator comprises: continually detecting the position of the operator.
Salume, however, teaches continual position tracking of the user during operation (see ¶ [0061], i.e., “the current location of the user, including the detected position and orientation of the head of the user, may be continually tracked…”, ¶ [0065], i.e., “the method 600 may continue to iterate…and thus continue to track a current location of the user…”). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the combined Walker and Yamana system so that detecting the operator position comprises continually detecting the operator position because Salume teaches that continual position tracking using sensors and repeated update/iteration is a known technique for maintaining an accurate, up-to-date user location during movement (see ¶¶ [0061] and [0065]), thereby maintaining alignment/repositioning of the audio sensing device as the operator moves and ensuring consistent capture of the assembly sound for verification.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Walker in
view of Casari et al. (Pub. No. US 2018/0356281) (hereinafter Casari).
As per claim 15, Walker teaches the system as stated above. However, Walker fails to teach controlling a collaborative robot to automatically perform the assembly of the connector based on data captured from a guidance sensor; and controlling the collaborative robot to reassemble the connector, in response to the verifying indicating that the assembly of the connector is an improper assembly.
Casari, however, teaches these features. In particular, Casari explicitly discloses that “audio transducer 118 or the coupling device 106 may also be mounted on or connected to a robotic end effector performing the coupling of the first connector 102 and the second connector 104” and that “In some embodiments, the velocity of the first connector 102 may be determined from control systems for a robotic end effector” (see ¶¶ [0023] and [0025]). Casari further discloses that, when analysis of the captured audio signal indicates that the coupling was unsuccessful, “the coupling is determined to be unsuccessful…an option to restart is provided” (see ¶¶ [0028] and [0033]). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to configure the assembly verification system of Walker to control a collaborative robot to automatically perform the assembly of the connector and to control the robot to reassemble the connector in response to the verification indicating an improper assembly, because Casari demonstrates that robotic end effectors are commonly used to perform connector coupling operations with audio-based verification and automated retry upon detection of an improper coupling, thereby improving assembly reliability, reducing manual intervention, and ensuring consistent connector mating on the assembly line.
Examiner’s Notes
Claims 6, 8-14 and 18-20 distinguish over the prior art.
Regarding claim 6, none of the prior art of record teaches or fairly suggests a method of verifying an assembly of a connector, the method comprising: wherein the determining whether the position of the operator is aligned with the position of the audio sensing device comprises: calculating offsets for linear actuators associated with the audio sensing device based on the relative position of the operator and the audio sensing device, in combination with the rest of the claim limitations as claimed and defined by the applicant.
Regarding claim 8, none of the prior art of record teaches or fairly suggests a method of verifying an assembly of a connector, the method comprising: wherein the verifying the assembly of the connector comprises: determining at least one region of interest in the at least one audio signal; extracting features from the at least one audio signal; and predicting whether the assembly of the connector is a proper assembly or an improper assembly based on the extracted features, in combination with the rest of the claim limitations as claimed and defined by the applicant.
Regarding claim 18, none of the prior art of record teaches or fairly suggests a system configured to verify an assembly of a connector, the system comprising: wherein the controller is configured to verify the assembly by: determining at least one region of interest in the at least one audio signal, extracting features from the at least one audio signal, and predicting whether the assembly of the connector is a proper assembly or an improper assembly based on the extracted features and training data, in combination with the rest of the claim limitations as claimed and defined by the applicant.
Allowable Subject Matter
Claim 7 is objected to as being dependent upon a rejected base claim, but would
be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claim 7, none of the prior art of record teaches or fairly suggests a method of verifying an assembly of a connector, the method comprising: wherein the capturing of the at least one audio signal comprises: capturing a plurality of audio signals from a plurality of audio sensing devices, respectively, during the assembly of the connector, the plurality audio sensing devices being spaced apart from each other at different locations within the footprint of the assembly line; and fusing the plurality of audio signals to generate a final audio signal, wherein the verifying verifies the assembly of the connector based on the final audio signal, in combination with the rest of the claim limitations as claimed and defined by the applicant.
Prior art
The prior art made of record and not relied upon is considered pertinent to
applicant’s disclosure:
Thomas [‘158] discloses a method for monitoring the position of objects; one embodiment of the invention can, for example, include at least the acts of: affixing a mobile computing device to an object to be monitored; periodically activating at least a portion of the mobile computing device to determine its location; subsequently transmitting the location to a web server through, at least in part, a wireless network; and displaying the location of the object to a monitoring party via the monitoring party’s access to the web server.
Wexler et al. [‘291] discloses a wearable apparatus for processing an audio signal may include at least one microphone configured to capture the audio signal from an environment of a user of the wearable apparatus and at least one processor programmed to: analyze the audio signal to identify an audio trigger; after identifying the audio trigger, store a portion of the audio signal containing a target audio segment related to the audio trigger; and determine an action by analyzing the portion of the audio signal.
Oswald et al. [‘529] discloses a system for providing spatialized audio in a vehicle, includes: a vehicle orientation sensor outputting a vehicle orientation signal and being disposed on the vehicle; and a controller configured to receive a user orientation signal output from a user orientation sensor being disposed on a wearable that, during use, moves with a first user's head, wherein the controller is further configured to determine an orientation of the user's head relative to the vehicle based, at least, on a difference between the vehicle orientation signal and the user orientation signal, the controller being further configured to output to a first binaural device, according to the orientation of the user's head relative to the vehicle, a first spatial audio signal, such that the first binaural device produces a first spatial acoustic signal perceived by the user as originating from a first virtual source location within a cabin of the vehicle.
Breed [‘299] discloses sensor assemblies fixed to the frame, each including a sensor arranged to obtain data about a condition or property of the vehicle or part thereof or an environment in or around the vehicle, and a wireless transmission component coupled to the sensor for wirelessly transmitting a signal derived from the data obtained by the sensor; a receiver fixed to the frame arranged to receive signals from the wireless transmission component; and a reactive component for performing an action based on the data obtained by the sensor and transmitted from the wireless transmission component to the receiver. The data can be displayed as an indication to the driver or other occupant of the vehicle, relayed to a remote location for analysis or response, and/or used to adjust or control a component in the vehicle.
Contact information
Any inquiry concerning this communication or earlier communications from the
examiner should be directed to MOHAMED CHARIOUI whose telephone number is (571)272-2213. The examiner can normally be reached Monday through Friday, from 9 am to 6 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Schechter can be reached on (571) 272-2302. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Mohamed Charioui
/MOHAMED CHARIOUI/Primary Examiner, Art Unit 2857