Prosecution Insights
Last updated: April 19, 2026
Application No. 18/770,309

ACTIVE SOUND MANAGEMENT SYSTEM IN A WORK MACHINE

Status: Non-Final OA (§103)
Filed: Jul 11, 2024
Examiner: AZIZ, SHEZA ABDUL
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: Deere & Company
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Estimated OA Rounds: 1-2
Estimated Time to Grant: 2y 9m
Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 6 across all art units (6 currently pending)

Statute-Specific Performance

§101: 20.0% (-20.0% vs TC avg)
§102: 5.0% (-35.0% vs TC avg)
§103: 65.0% (+25.0% vs TC avg)
§112: 10.0% (-30.0% vs TC avg)

Tech Center averages are estimates; based on career data from 0 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "a sound sensor", "a sound component identification system", "a component processing system", and a "signal generator" in claim 14; "a machine noise identifier" in claim 15; "an acoustic characteristic identifier" in claim 16; "a human voice identifier" in claim 17; "object sensor" in claim 19; and the "sound sensor", "sound component identification system", and "control signal generator" in claim 20. Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8, 13-15, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Reona (JP2019007139A) in view of Di Censo (US 9,716,939).

Regarding claim 1, Reona teaches a computer implemented method, comprising: sensing sound on a work machine – [0016 “In the aspect of the invention, it is preferable that the sensor unit includes a plurality of different sensors for each of the plurality of types of non-image information. Since a plurality of different types of sensors are provided, for example, a plurality of types of non-image information can be acquired with high accuracy by using sensors such as engine sound microphones, attachment peripheral sound microphones, and vibration sensors”].

However, Reona does not teach generating a sound signal based on the sensed sound; identifying a component of the sound in the sound signal; performing a sound management action based on the identified component to obtain a modified sound signal; and generating sound with an operator interface subsystem based on the modified sound signal.

But Di Censo teaches generating a sound signal based on the sensed sound – [Column 1, lines 44-46 – “Embodiments according to the present disclosure include a system and method for generating an auditory environment for a user that may include receiving a signal representing an ambient auditory environment of the user”]; identifying a component of the sound in the sound signal – [Column 1, lines 46-50 – “processing the signal using a microprocessor to identify at least one of a plurality of types of sounds in the ambient auditory environment, receiving user preferences corresponding to each of the plurality of types of sounds, modifying the signal for each type of sound in the ambient auditory environment”]; performing a sound management action based on the identified component to obtain a modified sound signal – [Column 1, lines 50-52 – “modifying the signal for each type of sound in the ambient auditory environment based on the corresponding user preference”]; and generating sound with an operator interface subsystem based on the modified sound signal – [Column 1, lines 52-54 – “and outputting the modified signal to at least one speaker to generate the auditory environment for the user”].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with the teachings of Di Censo because using the sound processing method of Di Censo with the work machine would help with sounds sensed around the work machine: they could be analyzed, classified into sound types, selectively modified, and delivered to the operator. Such a modification would improve the work machine system by enabling the system to identify important auditory events such as alarms, warning signals, or human voices. It would help attenuate irrelevant noise, enhance critical sounds needed for safe machine operation, and provide the operator with a clearer and more controlled auditory environment during machine operation.
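The rejection maps claim 1 onto a sense → identify → modify → output pipeline. A minimal sketch of that pipeline in Python, with a deliberately crude energy-based classifier standing in for Di Censo's sound-type identification (the sampling rate, 500 Hz split, gain values, and function names are illustrative assumptions, not from either reference):

```python
import numpy as np

FS = 8000  # assumed sampling rate (Hz); illustrative only

def classify_component(frame):
    """Hypothetical classifier: call a frame "machine" noise when most of
    its spectral energy sits below 500 Hz, otherwise "voice"."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1 / FS)
    low_energy = spectrum[freqs < 500].sum()
    return "machine" if low_energy > 0.5 * spectrum.sum() else "voice"

def manage_sound(frame, gains):
    """Apply the per-type gain (Di Censo's user-preference model) to the
    frame, yielding the modified sound signal played to the operator."""
    label = classify_component(frame)
    return gains.get(label, 1.0) * frame, label

# Usage: a 120 Hz machine hum is attenuated; higher-pitched sound passes through.
t = np.arange(256) / FS
hum = np.sin(2 * np.pi * 120 * t)
modified, label = manage_sound(hum, {"machine": 0.1, "voice": 1.0})
```

In a real system the classifier would of course be far more elaborate; the point is only the structure of the method steps the rejection discusses.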
Regarding claim 2, Reona does not teach the computer implemented method of claim 1 wherein performing a sound management action comprises: identifying the sound management action based on the identified component; and generating a control signal to execute the sound management action.

But Di Censo teaches the computer implemented method of claim 1 wherein performing a sound management action comprises: identifying the sound management action based on the identified component – [Column 10, lines 60-67; Column 11, lines 1-12 – “In operation, a representative embodiment of a system or method as illustrated in FIG. 3, for example, generates a customized or personalized user controllable auditory environment based on sounds from the ambient auditory environment by receiving a signal representing the sounds in the ambient auditory environment of the user from one or more microphones 312. DSP 310 processes the signal using a microprocessor to identify at least one of a plurality of types of sounds in the ambient auditory environment. DSP 310 receives user preferences 322 corresponding to each of the plurality of types of sounds and modifies the signal for each type of sound in the ambient auditory environment based on the corresponding user preference. The modified signal is output to amp(s) 314 and speaker(s) 316 to generate the auditory environment for the user. DSP 310 may receive a sound signal from an external device or source 340 in communication with DSP 310 via wired or wireless network 342. The received signal or data from the external device 340 (or database 350) is then combined with the modified types of sound by DSP 310”]; and generating a control signal to execute the sound management action – [Column 11, lines 5-12 – “The modified signal is output to amp(s) 314 and speaker(s) 316 to generate the auditory environment for the user. DSP 310 may receive a sound signal from an external device or source 340 in communication with DSP 310 via wired or wireless network 342. The received signal or data from the external device 340 (or database 350) is then combined with the modified types of sound by DSP 310”].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with the teachings of Di Censo so that sounds detected around the machine could be analyzed and classified into sound types and a corresponding signal could be generated to trigger an action to notify the operator. This would improve machine operation, thereby improving safety, responsiveness, and operator awareness in noisy work machine environments.

Regarding claim 3, Reona does not teach the computer implemented method of claim 1 wherein identifying the sound component comprises: identifying, as the sound component, machine sound generated by the work machine, wherein performing the sound management action comprises performing sound reduction to reduce the machine sound in the sound signal to obtain the modified sound signal.

But Di Censo teaches the computer implemented method of claim 1 wherein identifying the sound component comprises: identifying, as the sound component, machine sound generated by the work machine, wherein performing the sound management action comprises performing sound reduction to reduce the machine sound in the sound signal to obtain the modified sound signal – [Column 9, lines 12-31 – “For embodiments having intra-aural or circumaural earpieces, external sounds from the ambient auditory environment are passively attenuated before reaching the eardrums directly. These embodiments acoustically isolate the user by mechanically preventing external sound waves from reaching the ear drums. In these embodiments, the default auditory scene that the user hears without active or powered signal modification is silence or significantly reduced or muffled sounds, regardless of the actual external sounds. For the user to actually hear anything from the ambient auditory environment, the system has to detect external sounds with one or more microphones and deliver them to one or more inward-facing speakers so that they are audible to the user in the first place. Lowering or cancelling sound events may be accomplished primarily on a signal processing level. The external sound scene is analyzed, and-given the user preferences-is modified (processed) and then played back to the user through one or more inwards facing loudspeakers”]; [Column 13, lines 16-24 – “In the same or other embodiments, a user may specify the sound pressure level at which a particular sound is to be produced for the user. For example, the user may specify that an alarm clock sound is to be produced at 80 dBA SPL, while a partner's alarm clock is to be produced at 30 dBA SPL. In response, the DSP 310 (FIG. 3) may increase the loudness of the user's alarm (e.g., from 60 dBA SPL to 80 dBA SPL) and reduce the loudness of the user's alarm (e.g., from 60 dBA SPL to 30 dBA SPL)”].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with the teachings of Di Censo because incorporating the sound reduction technique into the work machine system would allow sounds detected around the work machine to be selectively reduced or cancelled, while allowing important sounds such as warnings, alarms, or operator communications to remain audible. This improves the usability and safety of the work machine system.
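The sound reduction and cancellation the rejection attributes to Di Censo can be illustrated numerically. This sketch assumes the machine-sound component has already been isolated; the 100 Hz hum, 440 Hz tone, and gain values are illustrative, not taken from either reference:

```python
import numpy as np

fs = 8000                                        # assumed sampling rate (Hz)
t = np.arange(fs) / fs
machine_hum = 0.8 * np.sin(2 * np.pi * 100 * t)  # identified machine-sound component
voice = 0.3 * np.sin(2 * np.pi * 440 * t)        # sound that should stay audible
ambient = machine_hum + voice                    # what the sound sensor picks up

# Sound reduction (claim 3 style): attenuate the identified component to 10%.
reduced = ambient - 0.9 * machine_hum

# Sound cancellation (claim 4 style): generate an inverted, out-of-phase copy
# of the component; summing it with the ambient signal cancels it toward zero.
cancellation = -machine_hum
cancelled = ambient + cancellation
```

With a perfect estimate of the component, `cancelled` contains only the voice; real systems only approximate this, which is why the references speak of lowering "toward zero".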
Regarding claim 4, Reona does not teach the computer implemented method of claim 3 wherein performing sound reduction comprises performing sound cancellation to remove the machine sound from the sound signal.

But Di Censo teaches the computer implemented method of claim 3 wherein performing sound reduction comprises performing sound cancellation to remove the machine sound from the sound signal – [Column 9, lines 34-48 – “In embodiments having supra-aural earpieces or other wearable speakers and microphones including above ear devices (e.g., traditional hearing aid), external sound is still able to reach the ear drums, so the default perceived auditory scene is mostly equivalent to the actual ambient auditory scene. In these embodiments, to lower or cancel a specific external sound event, the system has to create an active inverted sound signal to counteract the actual ambient sound signal. The cancellation signal is generated out of phase with the ambient signal sound signal so the inverted sound signal and ambient sound signal combine and cancel one another to remove (or lower toward zero) the specific sound event. Note that adding and enhancing sound events as represented by blocks 244 and 246 is done in the same way in both strategies with the sound event to be enhanced or added played back on the inward facing loudspeakers”]; [Column 8, lines 51-67; Column 9, lines 1-7 – “User preferences captured by the user interface are communicated to the wearable device as represented by block 230. In some embodiments, the user interface is integrated within the user device such that communication is via a program module, message, or similar strategy. In other embodiments, a remote user interface may communicate over a local or wide area network using wired or wireless communication technology. The received user preferences are applied to associated sounds within the ambient auditory environment as represented by block 240. This may include cancellation 242 of one or more sounds, addition or insertion 244 of one or more sounds, enhancement 246 of one or more sounds, or attenuation 248 of one or more sounds. The modified sounds are then provided to one or more speakers associated with or integrated with the wearable device. Additional processing of the modified sounds may be performed to virtually locate the sound(s) within the auditory environment of the user using stereo or multiple speaker arrangements as generally understood by those of skill in the art. Modification of one or more types or categories of sounds received by one or more ambient microphones of the wearable device in response to associated user preferences continues until the user preferences change as represented by block 250”].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with the teachings of Di Censo because machine-generated noise could be selectively cancelled or reduced from the audio signal, thereby allowing other sounds in the environment to be more clearly detected or communicated. This improves the processing of audio signals in noisy machine environments.

Regarding claim 5, Reona does not teach the computer implemented method of claim 1 wherein identifying the sound component comprises: identifying as the sound component a sound indicative of a characteristic of the work machine, wherein performing the sound management action comprises inserting an alert in the sound signal based on the characteristic of the machine to obtain the modified sound signal.
But Di Censo teaches the computer implemented method of claim 1 wherein identifying the sound component comprises: identifying as the sound component a sound indicative of a characteristic of the work machine, wherein performing the sound management action comprises inserting an alert in the sound signal based on the characteristic of the machine to obtain the modified sound signal – [Column 5, lines 19-34 – “Similar to the stored sounds or representative signals described above, alerts 106 may originate within the ambient auditory environment of user 120 and be detected by an associated microphone, or may be directly transmitted to system 100 using a wireless communication protocol such as Wi-Fi, Bluetooth, or cellular protocols. For example, a regional weather alert or Amber alert may be transmitted and received by system 100 and inserted or added to the auditory environment of the user. Depending on the particular implementation, some alerts may be processed based on user preferences, while other alerts may not be subject to various types of user preferences, such as cancellation or attenuation, for example. Alerts may include context-sensitive advertisements, announcements, or information, such as when attending a concert, sporting event, or theater, for example”]; [Column 11, lines 36-44 – “As previously described, this may include increasing level or volume, decreasing level or volume, canceling a particular sound, replacing a sound with a different sound (a combination of cancelling and inserting/adding a sound), or changing various qualities of a sound, such as equalization, pitch, etc., as represented by block 444. Desired sounds may be added or mixed with the sounds from the ambient auditory environment modified in response to the user preferences 322 and/or context sensors 330”]; [Column 10, lines 51-59 – “As previously described, context-sensitive sounds or data streams representing sounds may be provided from an associated audio source 340, such as a music player, an alert broadcaster, a stadium announcer, a store or theater, etc. Streaming data may be provided directly from audio source 340 to DSP 310 via a cellular connection, Bluetooth, or WiFi, for example. Data streaming or downloads may also be provided over a local or wide area network 342, such as the internet, for example”].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with the teachings of Di Censo so that the system could insert alert sounds into the audio signals delivered to the operator when particular events occur, such as machine status changes, safety warnings, or operational notifications. This modification would improve operator awareness and system safety.

Regarding claim 6, Reona does not teach the computer implemented method of claim 1 wherein identifying the sound component comprises: identifying as the sound component a desirable sound component that is to be provided to the operator interface subsystem, wherein performing the sound management action comprises amplifying the desirable sound component in the sound signal to obtain the modified sound signal.
But Di Censo teaches the computer implemented method of claim 1 wherein identifying the sound component comprises: identifying as the sound component a desirable sound component that is to be provided to the operator interface subsystem, wherein performing the sound management action comprises amplifying the desirable sound component in the sound signal to obtain the modified sound signal – [Column 12, lines 49-67 – “In other embodiments, user preferences may be captured or specified using sliders or similar controls that specify sound levels or sound pressure levels (SPL) in various formats. For example, sliders or other controls may specify percentages of the initial loudness of a particular sound, or dBA SPL (where 0 dB is "real", or in absolute SPL). Alternatively, or in combination, sliders or other controls may be labeled "low", "normal", and "enhanced." For example, a user may move a selector or slider, such as slider 542 to a percentage value of zero (e.g., corresponding to a "Low" value) when the user would like to attempt to completely block or cancel a particular sound. Further, the user may move a selector, such as slider 544 to a percentage value of one hundred (e.g., corresponding to a "Normal" or "Real" value) when the user would like to pass-through a particular sound. In addition, the user may move a selector, such as slider 546 to a percentage value above one-hundred (e.g., two-hundred percent) when the user would like to amplify or enhance a particular sound”]; [Column 14, lines 8-30 – “While graphical user interface controls are illustrated in the representative embodiments of FIGS. 5 and 6, other types of user interfaces may be used to capture user preferences with respect to customizing the auditory environment of the user. For example, voice activated controls may be used with voice recognition of particular commands, such as "Lower Voices" or "Voices Off". In some embodiments, the wearable device or linked mobile device may include a touch pad or screen to capture user gestures. For example, the user draws a character "V" (for voices), then swipes down (lowering this sound category). Commands or preferences may also be captured using the previously described context sensors to identify associated user gestures. For example, the user flicks his head to left (to selects voices or sound type coming from that direction), the wearable device system speaks to request confirmation "voices?", then the user lowers head (meaning, lowering this sound category). Multimodal input combinations may also be captured: e.g., user says "voices!" and at the same time swipes down on ear cup touch pad to lower voices. The user could point to a specific person and make a raise or lower gesture to amplify or lower the volume of that person's voice”].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with the teachings of Di Censo so that important or desirable sounds detected around the work machine could be selectively amplified before being delivered to an operator. Such a modification improves the usability and effectiveness of the system in noisy work environments.

Regarding claim 7, Reona does not teach the computer implemented method of claim 6 wherein identifying the sound component comprises: identifying as the desirable sound component a human voice, wherein performing the sound management action comprises amplifying the human voice in the sound signal to obtain the modified sound signal.

But Di Censo teaches the computer implemented method of claim 6 wherein identifying the sound component comprises: identifying as the desirable sound component a human voice, wherein performing the sound management action comprises amplifying the human voice in the sound signal to obtain the modified sound signal – [Column 14, lines 8-31 – “While graphical user interface controls are illustrated in the representative embodiments of FIGS. 5 and 6, other types of user interfaces may be used to capture user preferences with respect to customizing the auditory environment of the user. For example, voice activated controls may be used with voice recognition of particular commands, such as "Lower Voices" or "Voices Off". In some embodiments, the wearable device or linked mobile device may include a touch pad or screen to capture user gestures. For example, the user draws a character "V" (for voices), then swipes down (lowering this sound category). Commands or preferences may also be captured using the previously described context sensors to identify associated user gestures. For example, the user flicks his head to left (to selects voices or sound type coming from that direction), the wearable device system speaks to request confirmation "voices?", then the user lowers head (meaning, lowering this sound category). Multi-modal input combinations may also be captured: e.g., user says "voices!" and at the same time swipes down on ear cup touch pad to lower voices. The user could point to a specific person and make a raise or lower gesture to amplify or lower the volume of that person's voice. Pointing to a specific device may be used to specify the user wants to change the volume of the alarm for that device only”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with the teachings of Di Censo so that human voice signals detected in the environment could be selectively amplified before being delivered to the operator, thereby improving communication between workers and the machine operator in noisy work environments.

Regarding claim 8, Reona does teach the computer implemented method of claim 1 wherein identifying a sound component comprises: detecting a direction, relative to the work machine, from which the sound is received – [“…working part peripheral sound microphone 32 that collects sound and a peripheral sound microphone 33 that is provided on the arm 23 and collects work sound mainly generated from the bucket 25 are provided”]; [0029 “The sensor unit 30 includes a vibration sensor 34 that is provided on the cab 16 and detects vibration transmitted to the cab 16, and an inclination sensor 35 that is provided on the cab 16 and detects the inclination of the cab 16 with respect to a horizontal plane. Further, although not shown, an inertial sensor that is provided on the upper swing body 14 and detects inertia applied to the upper swing body 14 during turning, an odor sensor that is provided on the cab 16 and detects the odor around the construction machine 10, and the like”].

However, Reona does not teach identifying the sound component based on the direction. But Di Censo teaches identifying the sound component based on the direction – [Column 14, lines 8-31 – “While graphical user interface controls are illustrated in the representative embodiments of FIGS. 5 and 6, other types of user interfaces may be used to capture user preferences with respect to customizing the auditory environment of the user. For example, voice activated controls may be used with voice recognition of particular commands, such as "Lower Voices" or "Voices Off". In some embodiments, the wearable device or linked mobile device may include a touch pad or screen to capture user gestures. For example, the user draws a character "V" (for voices), then swipes down (lowering this sound category). Commands or preferences may also be captured using the previously described context sensors to identify associated user gestures. For example, the user flicks his head to left (to selects voices or sound type coming from that direction), the wearable device system speaks to request confirmation "voices?", then the user lowers head (meaning, lowering this sound category). Multi-modal input combinations may also be captured: e.g., user says "voices!" and at the same time swipes down on ear cup touch pad to lower voices. The user could point to a specific person and make a raise or lower gesture to amplify or lower the volume of that person's voice. Pointing to a specific device may be used to specify the user wants to change the volume of the alarm for that device only”].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with the teachings of Di Censo so that the direction of sound occurring relative to the work machine could be detected and conveyed to the operator. This modification improves situational awareness and safety in work machine environments.
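Direction detection from a microphone pair, as discussed for claim 8, is commonly done by estimating the inter-microphone time delay and converting it to an angle of arrival. A sketch under assumed values (48 kHz sampling, 0.5 m spacing, simulated 20-sample delay; neither reference specifies a particular algorithm):

```python
import numpy as np

fs = 48000              # assumed sampling rate (Hz)
mic_spacing = 0.5       # assumed distance between the two machine-mounted mics (m)
speed_of_sound = 343.0  # m/s

# Simulate a broadband sound reaching the second microphone 20 samples later.
rng = np.random.default_rng(0)
src = rng.standard_normal(4096)
delay = 20
mic1 = src
mic2 = np.concatenate([np.zeros(delay), src[:-delay]])

# Cross-correlate to find the inter-microphone delay, then convert that delay
# to an angle of arrival relative to the broadside of the microphone axis.
corr = np.correlate(mic2, mic1, mode="full")
lag = int(corr.argmax()) - (len(mic1) - 1)  # estimated delay in samples
tau = lag / fs                              # delay in seconds
angle_deg = np.degrees(
    np.arcsin(np.clip(tau * speed_of_sound / mic_spacing, -1.0, 1.0))
)
```

The sign of `lag` tells which side the sound came from, which is the directional cue the rejection says could be "conveyed to the operator".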
Regarding claim 13, Reona doesn’t teach the computer implemented method of claim 1 and further comprising: generating an operator interface with a sound management configuration input mechanism; and detecting an operator actuation of the sound management configuration input mechanism to identify a sound management configuration criterion, wherein identifying the component comprises identifying the component based on the sound management configuration criterion, and wherein performing a sound management action comprises identifying the sound management action based on the sound management configuration criterion. Di Censo teaches generating an operator interface with a sound management configuration input mechanism – [column 2. Lines 7-20 “Embodiments may include receiving user preferences wirelessly from a user interface generated by a second microprocessor, which may be embedded in a mobile device, such as a cell phone, for example. The user interface may dynamically generate user controls to provide a context-sensitive user interface in response to the ambient auditory environment of the user. As such, controls may only be presented where the ambient environment includes a corresponding type or group of sounds. Embodiments may include one or more context sensors to identify expected sounds and associated spatial orientation relative to the user within the audio environment. Context sensors may include a GPS sensor, accelerometer, or gyroscope, for example, in addition to one or more microphones”]. 
Di Censo also teaches detecting an operator actuation of the sound management configuration input mechanism to identify a sound management configuration criterion, wherein identifying the component comprises identifying the component based on the sound management configuration criterion, and wherein performing a sound management action comprises identifying the sound management action based on the sound management configuration criterion – [Column 5, lines 62-67, and column 6, lines 1-13 “Alternatively, system 100 may include a default mode that attenuates all sounds or amplifies all sounds from the ambient environment, or attenuates or amplifies particular frequencies of ambient sounds similar to operation of more conventional noise cancelling headphones or hearing aids, respectively. In contrast to such conventional systems, user 120 may personalize or customize his/her auditory environment using system 100 by setting different user preferences applied to different types or groups of sounds selected by an associated user interface. User preferences are then communicated to the DSP associated with earpieces 134 through wired or wireless technology, such as Wi-Fi, Bluetooth, or similar technology, for example. The wearable device 130 analyzes the current audio field and sounds 102, 104, 106, 108, 110, and 112 to determine what signals to generate to achieve the user's desired auditory scene. If the user changes preferences, the system updates the configuration to reflect the changes and apply them dynamically”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with those of Di Censo so that an operator could input criteria for processing signals detected around the machine. This would enable the operator to control how environmental signals are interpreted, improving the usability of the work machine system and allowing it to be adapted to different operator preferences. 
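The dependency recited in claim 13, in which a single operator-selected configuration criterion drives both which component is identified and which action is performed, can be illustrated with a minimal sketch. This is an illustrative sketch only, not from the record; the criterion table and names are hypothetical.

```python
# Illustrative sketch (hypothetical names, not from the cited patents):
# one operator-selected configuration criterion determines both the
# sound component to identify and the management action to perform,
# mirroring the two "wherein" clauses of claim 13.

# Hypothetical criterion table: criterion -> (component, action)
CRITERIA = {
    "suppress_engine": ("engine", "attenuate"),
    "boost_alarms": ("alarm", "amplify"),
}

def on_operator_actuation(criterion):
    """Resolve an actuated criterion into a (component, action) pair."""
    entry = CRITERIA.get(criterion)
    if entry is None:
        raise ValueError(f"unknown sound management criterion: {criterion}")
    component, action = entry
    return {"component": component, "action": action}
```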
Regarding claim 14, Reona teaches a work machine comprising a sound sensor on the work machine configured to sense sound [Reona describes sensing sound using sensors such as engine sound microphones, attachment peripheral sound microphones, and vibration sensors]. However, Reona does not teach generating a sound signal based on the sensed sound, and does not teach a sound component identification system configured to identify a component of the sound in the sound signal; a component processing system configured to identify a sound management action based on the identified component; and a control signal generator configured to generate a control signal to perform the sound management action to obtain a modified sound signal and to control an operator interface subsystem to generate sound based on the modified sound signal. But Di Censo teaches generating a sound signal based on sensed sound – [Column 1, lines 52-54 “and outputting the modified signal to at least one speaker to generate the auditory environment for the user”]. Di Censo also teaches a sound component identification system configured to identify a component of the sound in the sound signal – [Column 10, lines 33-39 “System 300 may communicate with a local or remote database or library 350 over a local or wide area network, such as the internet 352, for example. 
Database or library 350 may include sound libraries having stored sounds and/or associated signal characteristics for use by DSP 310 in identifying a particular type or group of sounds from the ambient audio environment”]; [Column 1, lines 46-50 “processing the signal using a microprocessor to identify at least one of a plurality of types of sounds in the ambient auditory environment, receiving user preferences corresponding to each of the plurality of types of sounds, modifying the signal for each type of sound in the ambient auditory environment”]. Di Censo further teaches a component processing system configured to identify a sound management action based on the identified component – [Column 18, lines 27-45 “A system for generating an auditory environment for a user, the system comprising: a speaker; a microphone; a digital signal processor configured to receive an ambient audio signal from the microphone representing an ambient auditory environment of the user, process the ambient audio signal to identify at least one of a plurality of types of sounds in the ambient auditory environment, modify the at least one type of sound based on user preferences received via the context-sensitive user interface”]; [Column 1, lines 50-52 “modifying the signal for each type of sound in the ambient auditory environment based on the corresponding user preference”]. 
Di Censo further teaches a control signal generator configured to generate a control signal to perform the sound management action to obtain a modified sound signal and to control an operator interface subsystem to generate sound based on the modified sound signal – [Column 18, lines 27-47 “A system for generating an auditory environment for a user, the system comprising: a speaker; a microphone; a digital signal processor configured to receive an ambient audio signal from the microphone representing an ambient auditory environment of the user, process the ambient audio signal to identify at least one of a plurality of types of sounds in the ambient auditory environment, modify the at least one type of sound based on received user preferences; and output the modified sound to the speaker to generate the auditory environment for the user”]; [Column 1, lines 52-54 “and outputting the modified signal to at least one speaker to generate the auditory environment for the user”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with the teachings of Di Censo because using the sound processing system of Di Censo with the work machine would allow sounds sensed around the work machine to be analyzed, classified into sound types, selectively modified, and delivered to the operator. Such a modification would improve the work machine system by enabling it to identify important auditory events such as alarms, warning signals, or human voices, attenuate irrelevant noise, enhance critical sounds needed for safe machine operation, and provide the operator with a clearer and more controlled auditory environment during machine operation. 
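The analyze-classify-modify-deliver pipeline described for the claim 14 combination can be sketched at a very high level. This is an illustrative sketch only, not the claimed system or Di Censo's DSP implementation; all names, the per-type gain model, and the list-of-samples signal representation are simplifying assumptions.

```python
# Illustrative sketch (hypothetical names, not from the cited patents):
# identified sound components are each scaled by a per-type gain drawn
# from user/operator preferences, then mixed into one modified signal
# for playback -- a toy version of a classify-then-modify pipeline.

from dataclasses import dataclass

@dataclass
class SoundComponent:
    kind: str              # e.g. "alarm", "voice", "engine"
    samples: list          # toy signal: a list of float samples

def apply_preferences(components, gains, default_gain=1.0):
    """Scale each identified component by the preferred per-type gain."""
    return [
        SoundComponent(c.kind, [s * gains.get(c.kind, default_gain)
                                for s in c.samples])
        for c in components
    ]

def mix(components):
    """Sum the modified components into one output signal."""
    length = max(len(c.samples) for c in components)
    signal = [0.0] * length
    for c in components:
        for i, s in enumerate(c.samples):
            signal[i] += s
    return signal
```

For example, a preference of 2.0 for alarms and 0.0 for engine noise would double the alarm component and drop the engine component before mixing.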
Regarding claim 15, Reona teaches the work machine of claim 14 wherein the sound component identification system comprises: a machine noise identifier configured to identify, as the sound component, machine sound generated by the work machine. However, Reona does not teach that the component processing system comprises an active noise cancellation processor configured to perform sound reduction to reduce the machine sound in the sound signal to obtain the modified sound signal. But Di Censo teaches a component processing system comprising an active noise cancellation processor configured to perform sound reduction to reduce the machine sound in the sound signal to obtain the modified sound signal – [Column 18, lines 52-56 “The system of claim 15 wherein the digital signal processor is configured to modify the at least one type of sound by attenuating, amplifying, or cancelling the at least one type of sound”]; [Column 9, lines 12-31 “For embodiments having intra-aural or circumaural earpieces, external sounds from the ambient auditory environment are passively attenuated before reaching the eardrums directly. These embodiments acoustically isolate the user by mechanically preventing external sound waves from reaching the ear drums. In these embodiments, the default auditory scene that the user hears without active or powered signal modification is silence or significantly reduced or muffled sounds, regardless of the actual external sounds. For the user to actually hear anything from the ambient auditory environment, the system has to detect external sounds with one or more microphones and deliver them to one or more inward-facing speakers so that they are audible to the user in the first place. Lowering or cancelling sound events may be accomplished primarily on a signal processing level. 
The external sound scene is analyzed and, given the user preferences, is modified (processed) and then played back to the user through one or more inwards facing loudspeakers”]; [Column 13, lines 16-24 “In the same or other embodiments, a user may specify the sound pressure level at which a particular sound is to be produced for the user. For example, the user may specify that an alarm clock sound is to be produced at 80 dBA SPL, while a partner's alarm clock is to be produced at 30 dBA SPL. In response, the DSP 310 (FIG. 3) may increase the loudness of the user's alarm (e.g., from 60 dBA SPL to 80 dBA SPL) and reduce the loudness of the partner's alarm (e.g., from 60 dBA SPL to 30 dBA SPL)”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with the teachings of Di Censo because combining a machine noise identifier with an active noise cancellation processor would create a smarter, adaptive sound reduction system. This modification would allow continuous analysis of the audio environment and would provide better voice clarity by eliminating or reducing machine noise. Regarding claim 18, Reona teaches the work machine of claim 14 wherein the sound sensor comprises: a directional microphone that senses sound in a direction and generates a directional microphone signal – [“…upper swing body 14 during turning, an odor sensor that is provided on the cab 16 and detects the odor around the construction machine 10, and the like”]. However, Reona does not teach that the sound component identification system comprises a direction processor configured to detect the direction, relative to the work machine, from which the sound is received based on the directional microphone signal, and wherein the sound component identification system is configured to identify the sound component based on the direction. 
But Di Censo teaches that sound is received based on the directional microphone signal and that the sound component identification system is configured to identify the sound component based on the direction – [Column 14, lines 20-25 “For example, the user flicks his head to left (to select voices or sound type coming from that direction), the wearable device system speaks to request confirmation "voices?", then the user lowers head (meaning, lowering this sound category)”]; [Column 2, lines 5-20 “The external device may communicate over a local or wide area network, such as the internet, and may include a database having stored sound signals of different types of sounds that may be used in identifying sound types or groups. Embodiments may include receiving user preferences wirelessly from a user interface generated by a second microprocessor, which may be embedded in a mobile device, such as a cell phone, for example. The user interface may dynamically generate user controls to provide a context-sensitive user interface in response to the ambient auditory environment of the user. As such, controls may only be presented where the ambient environment includes a corresponding type or group of sounds. Embodiments may include one or more context sensors to identify expected sounds and associated spatial orientation relative to the user within the audio environment. Context sensors may include a GPS sensor, accelerometer, or gyroscope, for example, in addition to one or more microphones.”] It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with the teachings of Di Censo in order to detect the direction from which a sound originates around the work machine. Such a configuration allows the system to determine the direction of sound sources around the work machine, thereby improving the operator's ability to recognize where relevant sounds originate. 
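One simple way a direction processor could attribute a direction to a sound, given several directional microphones aimed at known bearings, is to compare the energy each microphone captures. This is an illustrative sketch only, not the claimed direction processor or anything disclosed in the cited references; the bearing-keyed signal representation is an assumption.

```python
# Illustrative sketch (hypothetical names, not from the cited patents):
# pick the bearing whose directional microphone captured the most
# RMS energy, as a toy stand-in for a direction processor.

import math

def rms(samples):
    """Root-mean-square energy of a toy sample list."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def estimate_direction(mic_signals):
    """mic_signals: mapping of bearing in degrees -> list of samples.

    Returns the bearing whose microphone captured the most energy.
    """
    return max(mic_signals, key=lambda bearing: rms(mic_signals[bearing]))
```

A real system would more likely use inter-microphone time differences or beamforming, but the energy comparison conveys the idea of tagging an identified component with a bearing relative to the machine.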
Regarding claim 20, Reona teaches a control system comprising a sound sensor on the work machine configured to sense sound. However, Reona does not teach generating a sound signal based on the sensed sound; a sound component identification system configured to identify a component of the sound in the sound signal; and a control signal generator configured to generate a control signal to perform a sound management action based on the identified component of the sound. But Di Censo teaches generating a sound signal based on the sensed sound; a sound component identification system configured to identify a component of the sound in the sound signal; and a control signal generator configured to generate a control signal to perform a sound management action based on the identified component of the sound – [Column 1, lines 44-46 “Embodiments according to the present disclosure include a system and method for generating an auditory environment for a user that may include receiving a signal representing an ambient auditory environment of the user”]; [Column 1, lines 46-50 “processing the signal using a microprocessor to identify at least one of a plurality of types of sounds in the ambient auditory environment, receiving user preferences corresponding to each of the plurality of types of sounds, modifying the signal for each type of sound in the ambient auditory environment”]; [Column 1, lines 52-54 “and outputting the modified signal to at least one speaker to generate the auditory environment for the user”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with the teachings of Di Censo so that sounds detected around the machine could be analyzed and classified into sound types, and a corresponding signal could be generated to trigger an action notifying the operator. 
This would improve machine operation, thereby improving safety, responsiveness, and operator awareness in noisy work machine environments. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Reona (JP2019007139A) in view of Di Censo (US 2015/0195641 A1) and further in view of Wu (US 12,236,692 B2). Regarding claim 9, Reona does not teach the computer implemented method of claim 1 wherein identifying a sound component comprises: sensing an operator attention characteristic indicative of an attribute of operator attention; generating an operator attention signal based on the operator attention characteristic; and identifying the sound component based on the operator attention signal. But Di Censo teaches identifying the sound component. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with the teachings of Di Censo because doing so would improve the system by prioritizing operator-relevant sounds and reducing irrelevant noise, thereby enhancing situational awareness, reducing cognitive load, and improving operational safety. But Reona in view of Di Censo does not teach sensing an operator attention characteristic indicative of an attribute of operator attention; generating an operator attention signal based on the operator attention characteristic; and identifying the sound component based on the operator attention signal. However, Wu teaches that identifying a sound component comprises sensing an operator attention characteristic indicative of an attribute of operator attention – [Column 7, lines 37-40 “In one embodiment, and based on the detected facial features, the gaze direction estimator 108C may cause the processor(s) 108 to determine a gaze direction (e.g., for a gaze of an operator at the vehicle). In some embodiments, the gaze direction estimator 108C receives a series of images (and/or video). 
The gaze direction estimator 108C may detect facial features in multiple images (e.g., a series or sequence of images). Accordingly, the gaze direction estimator 108C may track gaze direction over time and store such information, for example, in database 140.”]; [Column 4, lines 51-66 “As shown in FIG. 1A, the driver distraction system 106 is communicatively coupled to a capture device 103, which may be used to obtain current data for the driver of the vehicle 101 along with the vehicle data and scene information. In one embodiment, the capture device 103 includes sensors and other devices that are used to obtain current data for the driver 102 of the vehicle 101. The captured data may be processed by processor(s) 104, which includes hardware and/or software to detect and track driver movement, head pose and gaze direction. As will be described in additional detail below, with reference to FIG. 1B, the capture device may additionally include one or more cameras, microphones or other sensors to capture data”]; generating an operator attention signal based on the operator attention characteristic – [Column 18, lines 1-10 “The driver distraction system 106 also captures visual information 202 (e.g., driver pose and/or gaze direction and duration) of the driver 102 of the vehicle 101 at step 704. In one embodiment, the gaze direction and duration are processed to generate a driver gaze heat map 406 at step 704A. The driver gaze heat map 406 may be generated based on the gaze direction and duration of the driver 102 while driving the vehicle 101, such that the driver gaze heat map 406 identifies one or more zones in the scene information viewed by the driver during the duration.”]; and identifying the sound component based on the operator attention signal – [“…generated based on the collected information and data. The reference heat map indicates areas or regions in the scene information for which a driver should pay attention to enhance safe driving. In one embodiment, the vehicle data is determinative of a driver's intention. 
For example, the driver's intention may be determined by analyzing vehicle status such as navigation routine, speed, steering wheel angle, gas pedal/brake pedal, etc. In a further embodiment, a gaze direction and duration of the driver is determined and a gaze trajectory is generated. The gaze trajectory represents the actual driver's attention areas or regions as it relates to the scene information, which may be generated in the form of a driver gaze heat map”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona in view of Di Censo with the teachings of Wu so that the operator attention signal generated by the operator monitoring system could be used as input when identifying sound components in the audio signal. In such a system, sounds originating from areas corresponding to the operator's focus or direction of attention could be identified, prioritized, or enhanced. This enables the system to identify sounds associated with the operator's area of attention, thereby improving human-machine interaction and situational awareness. Claims 10 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Reona (JP2019007139A) in view of Di Censo (US 9,716,939) and further in view of Kurosawa (US 12,534,883 B2). Regarding claim 10, Reona does not teach the computer implemented method of claim 1 wherein identifying a sound component comprises: sensing, with an object sensor on the work machine, an object characteristic indicative of a characteristic of the object; generating an object characteristic signal based on the object characteristic; and identifying the sound component based on the object characteristic signal. However, Di Censo teaches identifying the sound component – [Column 10, lines 34-39 “System 300 may communicate with a local or remote database or library 350 over a local or wide area network, such as the internet 352, for example. 
Database or library 350 may include sound libraries having stored sounds and/or associated signal characteristics for use by DSP 310 in identifying a particular type or group of sounds from the ambient audio environment”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with those of Di Censo because the ability to identify or distinguish between different sound components would significantly improve the system, allowing more effective handling of relevant sounds such as alerts or human voices in work machines. Reona in view of Di Censo does not teach wherein identifying a sound component comprises: sensing, with an object sensor on the work machine, an object characteristic indicative of a characteristic of the object; and generating an object characteristic signal based on the object characteristic. However, Kurosawa teaches sensing, with an object sensor on the work machine, an object characteristic indicative of a characteristic of the object, and generating an object characteristic signal based on the object characteristic – [“The object detector 70 may also be configured to detect a predetermined object within a predetermined area set in an area surrounding the shovel 100. Furthermore, the object detector 70 may also be configured in such a manner as to be able to distinguish between types of objects, for example, in such a manner as to be able to distinguish between a person and an object other than a person. For example, the object detector 70 may be configured to be able to detect a predetermined object and distinguish between types of objects based on a predetermined model such as a pattern recognition model, a machine learning model, or the like. The object detector 70 includes a front sensor 70F, a back sensor 70B, a left sensor 70L, and a right sensor 70R. 
An output signal corresponding to the result of detection performed by the object detector 70 (each of the front sensor 70F, the back sensor 70B, the left sensor 70L, and the right sensor 70R) is fed into the controller 30”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with the teachings of Di Censo and the teachings of Kurosawa to improve the system's ability to associate detected sounds with physical objects in the environment. In this way, the system can identify sound components based on the detected object. Additionally, incorporating object-based sound identification in a work machine would improve the system's ability to distinguish sounds associated with specific objects from other environmental or machine-generated sounds, thereby enhancing operator awareness and safety. Regarding claim 19, Reona does not teach the work machine of claim 14 further comprising: an object sensor configured to sense an object characteristic indicative of a characteristic of the object and generate an object characteristic signal based on the object characteristic, and wherein the sound component identification system is configured to identify the sound component based on the object characteristic signal. But Di Censo teaches identifying the sound component – [Column 10, lines 34-39 “System 300 may communicate with a local or remote database or library 350 over a local or wide area network, such as the internet 352, for example. Database or library 350 may include sound libraries having stored sounds and/or associated signal characteristics for use by DSP 310 in identifying a particular type or group of sounds from the ambient audio environment”]. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with those of Di Censo because the ability to identify or distinguish between different sound components would significantly improve the system, allowing more effective handling of relevant sounds such as alerts or human voices in work machines. Reona in view of Di Censo does not teach an object sensor configured to sense an object characteristic indicative of a characteristic of the object and generate an object characteristic signal based on the object characteristic, and wherein the sound component identification system is configured to identify the sound component based on the object characteristic signal. However, Kurosawa teaches an object sensor configured to sense an object characteristic indicative of a characteristic of the object and generate an object characteristic signal based on the object characteristic – [“…valve 60 to restrict the motion of the shovel 100. In this case, a target of motion restriction may be all of the driven elements or only one or some of the driven elements necessary for avoiding contact between the monitoring target object and the shovel 100”]; [Column 8, lines 4-28 “The object detector 70 detects an object in an area surrounding the shovel 100. Examples of monitoring target objects include persons, animals, vehicles, construction machinery, buildings, walls, fences, and holes. The object detector 70 includes, for example, at least one of a monocular camera (an example of a camera), an ultrasonic sensor, a millimeter wave radar, a stereo camera, a LIDAR (Light Detecting and Ranging), a distance image sensor, an infrared sensor, etc. The object detector 70 may also be configured to detect a predetermined object within a predetermined area set in an area surrounding the shovel 100. 
Furthermore, the object detector 70 may also be configured in such a manner as to be able to distinguish between types of objects, for example, in such a manner as to be able to distinguish between a person and an object other than a person. For example, the object detector 70 may be configured to be able to detect a predetermined object and distinguish between types of objects based on a predetermined model such as a pattern recognition model, a machine learning model, or the like. The object detector 70 includes a front sensor 70F, a back sensor 70B, a left sensor 70L, and a right sensor 70R. An output signal corresponding to the result of detection performed by the object detector 70 (each of the front sensor 70F, the back sensor 70B, the left sensor 70L, and the right sensor 70R) is fed into the controller 30”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with the teachings of Di Censo and the teachings of Kurosawa so that the system's ability to associate detected sounds with physical objects in the environment is improved. In this way, the system can identify sound components based on the detected object. Additionally, incorporating object-based sound identification in a work machine would improve the system's ability to distinguish sounds associated with specific objects from other environmental or machine-generated sounds, thereby enhancing operator awareness and safety. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Reona (JP2019007139A) in view of Di Censo (US 9,716,939), in view of Kurosawa (US 12,534,883 B2), and further in view of Hernandez-Abrego (US 8,761,412 B2). Regarding claim 11, Reona in view of Di Censo and Kurosawa does not teach the computer implemented method of claim 10 wherein identifying the sound component comprises: aiming a microphone based on the object characteristic signal. 
However, Hernandez-Abrego teaches aiming a microphone based on the object characteristic signal – [Column 2, lines 14-17 “FIGS. 4A and 4B are schematic diagrams illustrating a microphone array for providing an audio signal for steering based on positional information determined from at least visual image data, according to one embodiment”]; [Column 2, lines 45-59 “Described herein are methods and systems for filtering audio signals based on positional information determined from at least visual image data. Embodiments employ positional data determined from an image-based object tracking system in beam forming of an audio signal received with a microphone array. In an embodiment, positional information is determined through video analysis of a visual frame(s) containing an object, for example, a game motion controller. An audio filter is then applied to remove sound sources outside of a zone co-located with the tracked object. In certain embodiments, audio quality is improved for a target sound source, for example, a user holding the object, such that a far-field microphone having a fixed position may be utilized for purposes typically reserved for near-field microphones (e.g., speech recognition, etc.)”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona in view of Di Censo and Kurosawa with those of Hernandez-Abrego in order to capture environmental sound signals from different directions surrounding the work machine, thereby enabling the system to more effectively monitor conditions and events around the work machine. Claims 12 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Reona (JP2019007139A) in view of Di Censo (US 2015/0195641 A1) and further in view of Irwin (US 2016/0379274 A1). 
Regarding claim 12, Reona in view of Di Censo does not teach the computer implemented method of claim 1 wherein identifying a sound component comprises: identifying acoustic features of the sound signal; and running a machine learning model based on the acoustic characteristics to identify the sound component. However, Irwin teaches identifying acoustic features of a sound signal and running a machine learning model based on the acoustic characteristics to identify the sound component – [Column 4, lines 21-29 “The content server 130 receives advertisement information including audio ads from the advertisers 120. The content server 130 associates the audio ads with music features representing musicological characteristics of the audio ad (e.g., musical genre, instruments, emotional tone). To determine the music features associated with an audio ad, the content server 130 determines acoustic features quantitatively describing the audio ad and maps those acoustic features to corresponding music features”]; [Column 7, lines 25-40 “The ad analyzer 305 obtains an audio ad from an advertiser 120 and associates the audio ad with music features. To determine the music features, the ad analyzer 305 determines quantitative acoustic features summarizing the audio ad. These acoustic features are mapped to music features according to a music feature model. In one embodiment, the music feature model is a machine learning model including one or more classifiers to determine whether an audio ad is associated with a particular music feature according to the acoustic features of the audio ad. The music feature model depends on parameters, which are trained according to training data (e.g., audio content already associated with music features). Using the music features determined from the music feature model, the ad selector 335 selects audio ads similar to other music played at a client device 110. 
The ad analyzer 305 is described further with respect to FIG. 4”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona in view of Di Censo with those of Irwin because analyzing audio signals to determine acoustic features that quantitatively describe audio content would help in identifying or classifying sounds. It would also help the system distinguish between different sound sources and improve the detection of relevant events in noisy work machine environments. Regarding claim 16, Reona does not teach the work machine of claim 14 wherein the sound component identification system comprises: an acoustic characteristic identifier configured to identify a sound indicative of an alert condition of the work machine, wherein the component processing system comprises a sound insertion component configured to insert an alert sound into the sound signal to obtain the modified sound signal. However, Di Censo teaches that the sound component identification system comprises a sound insertion component – [“…such as Wi-Fi, Bluetooth, or cellular protocols. For example, a regional weather alert or Amber alert may be transmitted and received by system 100 and inserted or added to the auditory environment of the user. Depending on the particular implementation, some alerts may be processed based on user preferences, while other alerts may not be subject to various types of user preferences, such as cancellation or attenuation, for example. 
Alerts may include context-sensitive advertisements, announcements, or information, such as when attending a concert, sporting event, or theater, for example”]; [Column 11, lines 36-44 “As previously described, this may include increasing level or volume, decreasing level or volume, canceling a particular sound, replacing a sound with a different sound (a combination of cancelling and inserting/adding a sound), or changing various qualities of a sound, such as equalization, pitch, etc., as represented by block 444. Desired sounds may be added or mixed with the sounds from the ambient auditory environment modified in response to the user preferences 322 and/or context sensors 330”]; [Column 10, lines 51-59 “As previously described, context-sensitive sounds or data streams representing sounds may be provided from an associated audio source 340, such as a music player, an alert broadcaster, a stadium announcer, a store or theater, etc. Streaming data may be provided directly from audio source 340 to DSP 310 via a cellular connection, Bluetooth, or WiFi, for example. Data streaming or downloads may also be provided over a local or wide area network 342, such as the internet, for example”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with Di Censo to improve the handling of alerts in the work machine environment. This improves the system by ensuring that important alerts are more clearly perceived by the operator, thereby enhancing safety and responsiveness. But Reona in view of Di Censo does not teach an acoustic characteristic identifier configured to identify a sound. However, Irwin teaches an acoustic characteristic identifier configured to identify a sound – [Column 11, lines 4-15 “FIG. 4 is a high-level block diagram illustrating a detailed view of an ad analyzer 305, according to an embodiment. 
The ad analyzer 305 includes an acoustic feature identifier 405, an acoustic feature summarizer 410, a music feature model 415, a music feature model trainer 420, and a model feedback engine 425. Some embodiments of the ad analyzer 305 have different modules than those described here or may distribute functions in a different manner than that described here. The acoustic feature identifier 405 receives audio content (e.g., an audio ad) and determines acoustic features quantitatively describing the audio content”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona in view of Di Censo with the teachings of Irwin to apply the acoustic feature identifier techniques to the sound signals captured by microphones to generate alerts when particular acoustic characteristics are detected. This would help the system alert the operator when particular events occur, such as machine status changes, safety warnings, or operational notifications. This modification would improve operator awareness and system safety. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Reona (JP2019007139A) in view of Di Censo (US 2015/0195641 A1) and in further view of SHI (CN110097875A). Regarding claim 17, Reona does not teach the work machine of claim 14 wherein the component identification system comprises: a human voice identifier configured to identify human voice that is to be provided to the operator interface subsystem, wherein the component processing system comprises an amplification component configured to amplify the human voice in the sound signal to obtain the modified sound signal. 
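The claim 17 arrangement recited above (a human voice identifier whose output drives an amplification component) can be sketched in a few lines of Python. This is purely illustrative: the speech band edges, energy-ratio threshold, and gain below are assumptions, not anything taught by the application or the cited references.

```python
# Illustrative sketch only (assumed band edges, threshold, and gain):
# a voice-band energy test stands in for a "human voice identifier",
# and a gain stage stands in for an "amplification component".
import numpy as np

def voice_band_energy_ratio(signal, sample_rate):
    """Fraction of spectral energy in a nominal speech band (300-3400 Hz)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    in_band = (freqs >= 300.0) & (freqs <= 3400.0)
    total = spectrum.sum()
    return float(spectrum[in_band].sum() / total) if total > 0 else 0.0

def amplify_if_voice(signal, sample_rate, gain=2.0, threshold=0.5):
    """Return the modified signal, amplified only when the frame looks voice-like."""
    if voice_band_energy_ratio(signal, sample_rate) > threshold:
        return np.clip(signal * gain, -1.0, 1.0)
    return signal
```

A real system would run this per frame on the microphone stream and would likely use a trained voice-activity detector rather than a fixed energy-ratio rule.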
However, Di Censo teaches the component processing system comprises an amplification component configured to amplify the human voice in the sound signal to obtain the modified sound signal – [Column 14, lines 26-31 “The user could point to a specific person and make a raise or lower gesture to amplify or lower the volume of that person's voice. Pointing to a specific device may be used to specify the user wants to change the volume of the alarm for that device only”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with Di Censo because incorporating the sound component identification into the work machine sound system would improve the clarity of the human voice in the sensed audio. Additionally, amplifying the human voice in the audio signal would enhance it relative to background noise, thereby improving operator awareness and communication overall. Reona in view of Di Censo does not teach a human voice identifier configured to identify human voice that is to be provided to the operator interface subsystem. However, SHI teaches a human voice identifier configured to identify human voice that is to be provided to the operator interface subsystem – [0024 “Preferably, the electronic device is also operable to: respond to determining that a user is speaking to the electronic device at close range; determine that the user is emitting sound in one of the following ways: the user is speaking at a normal volume; the user is speaking at a low volume; the user is speaking without vocal cords; and process the sound signal differently depending on the result of the determination.”]. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reona with the teachings of Di Censo and the teachings of SHI because applying human-voice identification techniques in sound processing would help identify when a sound corresponds to human speech, such as a person speaking or calling out. Additionally, Di Censo’s techniques of amplifying the human voice would improve operator awareness and safety in environments where workers may be present near the work machines.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHEZA ABDUL AZIZ whose telephone number is (571)272-9610. The examiner can normally be reached Monday-Friday, 7:30am-5pm, alternate Fridays off. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn, can be reached at (571) 272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL C WASHBURN/
Supervisory Patent Examiner, Art Unit 2657
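For context on the technique the rejections attribute to Irwin for claims 12 and 16 (quantitative acoustic features mapped to a sound label by a trained model), the pipeline can be sketched in Python. The two toy features and the nearest-centroid stand-in for a trained classifier are illustrative assumptions, not Irwin's implementation:

```python
# Minimal sketch of an "acoustic features -> trained model -> sound label"
# pipeline. Feature choices and the nearest-centroid "model" are assumed
# for illustration only.
import numpy as np

def acoustic_features(signal, sample_rate):
    """Two toy features: RMS energy and spectral centroid (Hz)."""
    rms = float(np.sqrt(np.mean(signal ** 2)))
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    centroid = float((freqs * spectrum).sum() / spectrum.sum()) if spectrum.sum() > 0 else 0.0
    return np.array([rms, centroid])

class NearestCentroidModel:
    """Stand-in for a trained classifier: predict the closest class centroid."""

    def fit(self, feature_vectors, labels):
        self.centroids = {
            label: np.mean([f for f, l in zip(feature_vectors, labels) if l == label], axis=0)
            for label in set(labels)
        }
        return self

    def predict(self, feature_vector):
        return min(self.centroids,
                   key=lambda label: np.linalg.norm(feature_vector - self.centroids[label]))
```

Training on labeled sound clips and predicting a label for each captured frame is the general pattern the rejection describes; a production system would use richer features (e.g., MFCCs) and an actual learned model rather than class centroids.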

Prosecution Timeline

Jul 11, 2024: Application Filed
Mar 28, 2026: Non-Final Rejection — §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
