DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office action is in response to Applicant's Amendments and Remarks filed on 01/16/2026 for application number 18/340,370, filed on 06/23/2023, in which claims 1-20 were previously presented for examination.
Claims 1, 12, 13, and 18 are amended.
Claim 11 is canceled.
Claims 1-10 and 12-20 are currently pending in this application.
Response to Arguments
Applicant's Amendments and Remarks filed on 01/16/2026 in response to the Non-Final Office action mailed on 10/16/2025 have been fully considered and are addressed as follows:
Regarding the claim rejections under 35 U.S.C. § 103: Applicant has amended the independent claims, and the amendments change the scope of the claims. The Office has therefore set forth new grounds of rejection below in this Final Office action, and Applicant's prior arguments are accordingly moot.
FINAL OFFICE ACTION
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 13, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Bruflodt et al. (US 2022/0154423 A1, hereinafter “Bruflodt”) in view of Iwami et al. (JP 2011192070 A, hereinafter “Iwami”).
Regarding claim 1, Bruflodt discloses a method for dynamically adjusting alerts based on detected objects external to a work vehicle, the method comprising:
capturing images from a plurality of vehicle-mounted cameras, at least one of the vehicle-mounted cameras having a field of vision corresponding to an area of interest in a backward direction of movement for the work vehicle (Bruflodt at para. [0046]: “four cameras 202 are arranged at each of front, left, rear, and right sides from the perspective of a working direction of the work vehicle 100, for recording individual image regions of the surroundings of the work vehicle 100 from different image recording positions”);
generating, from the captured images of the at least one of the vehicle-mounted cameras in the backward direction of movement, object signals representative of detected objects in at least the corresponding at least one field of vision (Bruflodt at para. [0076]: “A surround view image 242 in certain embodiments may be manipulated 440, such as to change the region of interest, automatically based on a trigger 435 associated with obstacle detection. If an obstacle is detected by the obstacle detection system 206, alone or in coordination with the controller 112, the surround view image 242 may be automatically changed to a sub-view 244 which focuses on the area where the object was detected”);
predicting a work state associated with at least a forward or backward movement direction for the work vehicle based on at least one operating characteristic of the work vehicle (Bruflodt at para. [0073]: “when the control system 200 determines that the self-propelled work vehicle 100 is performing a certain function, the surround view image 242 can automatically change to a smaller sub-view 244 which gives more focused visibility appropriate to that function. For example, if the work vehicle 200 is determined to be backing up straight (or such movement is predicted based on detected steering commands or based on a detected work state consistent with such movement), the surround view image 242 changes to a sub-view 244 of the rear of the work vehicle 100”);
when a work state associated with a backward movement direction is predicted, selectively generating images on a display unit, the generated images corresponding to the at least one vehicle-mounted camera corresponding to an area of interest in a backward direction of movement for the work vehicle (Bruflodt at para. [0073]: “if the work vehicle 200 is determined to be backing up straight (or such movement is predicted based on detected steering commands or based on a detected work state consistent with such movement), the surround view image 242 changes to a sub-view 244 of the rear of the work vehicle 100”);
determining one or more working conditions associated with visibility or certainty with respect to identifying the detected objects in at least the corresponding at least one field of vision, or lower control of the work vehicle and/or work implements thereof (Bruflodt at para. [0073]: “A surround view image 242 in certain embodiments may be manipulated 440 automatically based on a trigger 433 associated with detection of certain operating conditions or a predetermined work state”; “if the work vehicle 200 is determined to be backing up straight (or such movement is predicted based on detected steering commands or based on a detected work state consistent with such movement), the surround view image 242 changes to a sub-view 244 of the rear of the work vehicle 100”; “when the work vehicle 100 is commanded to swing, the control system 200 may likewise detect the change in work state and change the surround view image 242 to a sub-view 244 showing the area into which the counterweight is swinging”; para. [0074]: “upon detecting that the work vehicle 100 is traveling faster, the control system 200 may manipulate the simulated field of view to become larger”; para. [0075]: “the simulated field of view may become automatically adjusted to increase the field of view upon detecting that a working implement 120 such as an excavator boom reaches out farther”; “if it is detected that an excavator 100 is operating in a twelve-second cycle of 180-degree truck loading, then the control system 200 may anticipate when the operator is about to swing the main frame 132 of the work vehicle 100 and preemptively bring up a sub-view 244 of the area into which the operator will swing”; Moving speed and direction of the work vehicle and operation of the work implements are associated with “visibility or certainty with respect to identifying the detected objects,” or “lower control of the work vehicle and/or work implements”).
However, Bruflodt does not explicitly state:
specifying sizes of a plurality of zones concentrically extending from a point associated with the work vehicle and thresholds for alerts corresponding to a current work area, based at least in part on the determined one or more working conditions and the at least one operating characteristic of the work vehicle;
displaying first indicia with respect to the selectively generated images, based at least in part on the plurality of zones concentrically extending from a point associated with the work vehicle;
determining, from the captured images of the at least one of the vehicle-mounted cameras in the backward direction of movement, a distance to any one or more detected objects in the respective field of vision; and
displaying second indicia with respect to the selectively generated images, wherein the displayed second indicia correspond to a respective intervention state for the any one or more of the detected objects in the respective field of vision and further located within any of the one or more zones, the respective intervention state determined based on at least the determined distance, the at least one operating characteristic of the work vehicle, the predicted work state, and the threshold for alerts.
In the same field of endeavor, Iwami teaches:
specifying sizes of a plurality of zones concentrically extending from a point associated with the work vehicle and thresholds for alerts corresponding to a current work area, based at least in part on the determined one or more working conditions and the at least one operating characteristic of the work vehicle (Iwami at FIGS. 6 and 8 and para. [0056]: “if it is estimated that the visibility state is not good (No in S15), in step S17, the image region corresponding to the predicted course is identified on the display image obtained in step S11, and the prediction is performed. The TTC position for poor visibility is displayed in the image area corresponding to the course. In this embodiment, a form as shown in FIG. 6B is used as the TTC position for poor visibility. Accordingly, the first and second TTC positions are closer to the vehicle than in the case where the visibility state is good (in the case of step S16), and the positions on the image corresponding to the closer first and second TTC positions are closer to the vehicle”);
displaying first indicia with respect to the selectively generated images, based at least in part on the plurality of zones concentrically extending from a point associated with the work vehicle (Iwami at FIG. 8 and para. [0053]: “FIG. 8A, an example of the first to third auxiliary lines 111 to 113 superimposed on the display image 101 is shown”);
determining, from the captured images of the at least one of the vehicle-mounted cameras in the backward direction of movement, a distance to any one or more detected objects in the respective field of vision (Iwami at para. [0057]: “the expected arrival time TTC (= distance / relative speed) can be calculated based on the distance from the host vehicle to the object and the relative speed of the vehicle with respect to the object”); and
displaying second indicia with respect to the selectively generated images, wherein the displayed second indicia correspond to a respective intervention state for the any one or more of the detected objects in the respective field of vision and further located within any of the one or more zones, the respective intervention state determined based on at least the determined distance, the at least one operating characteristic of the work vehicle, the predicted work state, and the threshold for alerts (Iwami at para. [0060]: “an example in which the object is highlighted on the display image 101 as shown in FIG. 8A is shown. In this example, as described above, the first auxiliary line 111 of green or blue is displayed at the first TTC position. Therefore, if the TTC position corresponding to the TTC time closest to the predicted arrival time TTC of the object is specified as the first TTC position, the object is displayed in green or blue as indicated by reference numeral 131”; para. [0061]: “The method for displaying the target object in a predetermined color may be, for example, converting the color of the pixel in the region extracted as the target object into the predetermined color for display, or the human of the predetermined color An icon image imitating the image may be superimposed on the area extracted as the object”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Bruflodt regarding backward object detection for the work vehicle by adding the plurality of zones of Iwami with a reasonable expectation of success. The motivation to modify the method of Bruflodt in view of Iwami is to enhance driver situational awareness.
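Office Note (illustration only): the TTC-based zone logic cited from Iwami above can be summarized in the following sketch, which is provided solely as a reading aid and forms no part of the claim mapping. All function names, threshold values, and the mapping of zones to intervention states are hypothetical placeholders; they are not taken from Bruflodt, Iwami, or the claims.

```python
def intervention_state(distance_m, closing_speed_mps, good_visibility=True):
    """Classify a detected object into an alert zone by time-to-collision.

    Iwami teaches TTC = distance / relative speed (para. [0057]) and that,
    under poor visibility, the TTC positions lie closer to the vehicle
    (para. [0056]) -- modeled here as smaller (hypothetical) TTC thresholds.
    """
    if closing_speed_mps <= 0:
        return "none"  # object is not closing on the vehicle
    ttc_s = distance_m / closing_speed_mps
    # Hypothetical TTC thresholds in seconds, innermost zone first.
    zones = (2.0, 4.0, 6.0) if good_visibility else (1.5, 3.0, 4.5)
    for state, limit in zip(("stop", "warn", "caution"), zones):
        if ttc_s <= limit:
            return state
    return "none"
```

For example, an object 25 m away closing at 5 m/s (TTC of 5 s) falls in the outermost zone under good visibility but outside all zones under the tightened poor-visibility thresholds.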
Office Note:
The limitation “work state” is interpreted as “mode or condition of being associated with a specific task, duty, function, or assignment” (“Work.” Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/work. Accessed 8 Oct. 2025.; “State.” Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/state. Accessed 8 Oct. 2025.).
The limitation “work area” is interpreted as “the surface included within a set of lines associated with a specific task, duty, function, or assignment” (“Work.” Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/work. Accessed 8 Oct. 2025.; “Area.” Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/area. Accessed 8 Oct. 2025.).
Regarding claim 2, Bruflodt in view of Iwami teaches the method of claim 1.
Bruflodt further discloses wherein the at least one operating characteristic of the work vehicle comprises a travelling speed of the work vehicle (Bruflodt at para. [0079]: “the radius and/or depth of the bowl 240 may be automatically modified in accordance with a change in speed, wherein the overhead image 242 is accordingly modified as well”).
Regarding claim 3, Bruflodt in view of Iwami teaches the method of claim 1.
Bruflodt further discloses wherein the at least one operating characteristic of the work vehicle comprises a movement of a work implement moveable independently with respect to a frame of the work vehicle (Bruflodt at para. [0073]: “As another example, on an excavator the boom and bucket controls may be engaged in a digging or trenching pattern, wherein the control system 200 detects the relevant work state and changes the surround view image 242 to a sub-view 244 of the front of the work vehicle 100, which focuses on the boom and bucket 120. On the same excavator, when the work vehicle 100 is commanded to swing, the control system 200 may likewise detect the change in work state and change the surround view image 242 to a sub-view 244 showing the area into which the counterweight is swinging”).
Regarding claim 4, Bruflodt in view of Iwami teaches the method of claim 1.
Iwami further teaches wherein the at least one operating characteristic of the work vehicle comprises an estimated stopping distance and/or stopping time (Iwami at para. [0043]: “the distances Da3 and Db3 of the third auxiliary line 113 are determined so that they can be stopped when the vehicle performs a braking operation at a predetermined deceleration at the current time”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Bruflodt in view of Iwami by adding the estimated stopping distance and/or stopping time of Iwami with a reasonable expectation of success. The motivation to modify the method of Bruflodt in view of Iwami is to enhance driver situational awareness.
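Office Note (illustration only): the stopping-distance determination cited from Iwami at para. [0043] (stopping at a predetermined deceleration from the current speed) follows from the constant-deceleration kinematic relation d = v²/(2a). The sketch below is provided solely as a reading aid; the default deceleration value is a hypothetical placeholder, not a value disclosed by Iwami.

```python
def stopping_distance_m(speed_mps, decel_mps2=3.0):
    """Distance needed to stop from speed_mps at a constant predetermined
    deceleration decel_mps2 (hypothetical default): d = v**2 / (2 * a)."""
    return speed_mps ** 2 / (2.0 * decel_mps2)
```

For example, from 6 m/s at 3 m/s² the vehicle needs 6 m to stop.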
Regarding claim 13, Bruflodt discloses a work vehicle comprising:
a plurality of vehicle-mounted cameras each configured to generate image data corresponding to a respective image region, wherein at least one of the vehicle-mounted cameras has a respective field of vision corresponding to an area of interest in a backward direction of movement for the work vehicle (Bruflodt at para. [0046]: “four cameras 202 are arranged at each of front, left, rear, and right sides from the perspective of a working direction of the work vehicle 100, for recording individual image regions of the surroundings of the work vehicle 100 from different image recording positions”); and
a controller linked to receive the generated image data, and configured to
predict a work state associated with at least a forward or backward movement direction for the work vehicle based on at least one operating characteristic of the work vehicle (Bruflodt at para. [0073]: “when the control system 200 determines that the self-propelled work vehicle 100 is performing a certain function, the surround view image 242 can automatically change to a smaller sub-view 244 which gives more focused visibility appropriate to that function. For example, if the work vehicle 200 is determined to be backing up straight (or such movement is predicted based on detected steering commands or based on a detected work state consistent with such movement), the surround view image 242 changes to a sub-view 244 of the rear of the work vehicle 100”);
when a work state associated with a backward movement direction is predicted, selectively generate images on a display unit corresponding to the at least one vehicle-mounted cameras corresponding to an area of interest in a backward direction of movement for the work vehicle (Bruflodt at para. [0073]: “if the work vehicle 200 is determined to be backing up straight (or such movement is predicted based on detected steering commands or based on a detected work state consistent with such movement), the surround view image 242 changes to a sub-view 244 of the rear of the work vehicle 100”),
determine one or more working conditions associated with visibility or certainty with respect to identifying the detected objects in at least the corresponding at least one field of vision, or lower control of the work vehicle and/or work implements thereof (Bruflodt at para. [0073]: “A surround view image 242 in certain embodiments may be manipulated 440 automatically based on a trigger 433 associated with detection of certain operating conditions or a predetermined work state”; “if the work vehicle 200 is determined to be backing up straight (or such movement is predicted based on detected steering commands or based on a detected work state consistent with such movement), the surround view image 242 changes to a sub-view 244 of the rear of the work vehicle 100”; “when the work vehicle 100 is commanded to swing, the control system 200 may likewise detect the change in work state and change the surround view image 242 to a sub-view 244 showing the area into which the counterweight is swinging”; para. [0074]: “upon detecting that the work vehicle 100 is traveling faster, the control system 200 may manipulate the simulated field of view to become larger”; para. [0075]: “the simulated field of view may become automatically adjusted to increase the field of view upon detecting that a working implement 120 such as an excavator boom reaches out farther”; “if it is detected that an excavator 100 is operating in a twelve-second cycle of 180-degree truck loading, then the control system 200 may anticipate when the operator is about to swing the main frame 132 of the work vehicle 100 and preemptively bring up a sub-view 244 of the area into which the operator will swing”; Moving speed and direction of the work vehicle and operation of the work implements are associated with “visibility or certainty with respect to identifying the detected objects,” or “lower control of the work vehicle and/or work implements”).
However, Bruflodt does not explicitly state:
specify sizes of a plurality of zones concentrically extending from a point associated with the work vehicle and thresholds for alerts corresponding to a current work area, based at least in part on the determined one or more working conditions and the at least one operating characteristic of the work vehicle;
display first indicia with respect to the selectively generated images, based at least in part on the plurality of zones concentrically extending from a point associated with the work vehicle;
determine, from the captured images of the at least one of the vehicle-mounted cameras in the backward direction of movement, a distance to any one or more detected objects in the respective field of vision; and
display second indicia with respect to the selectively generated images, wherein the displayed second indicia correspond to a respective intervention state for the any one or more of the detected objects in the respective field of vision and further located within any of the one or more zones, the respective intervention state determined based on at least the determined distance, the at least one operating characteristic of the work vehicle, the predicted work state, and the threshold for alerts.
In the same field of endeavor, Iwami teaches:
specify sizes of a plurality of zones concentrically extending from a point associated with the work vehicle and thresholds for alerts corresponding to a current work area, based at least in part on the determined one or more working conditions and the at least one operating characteristic of the work vehicle (Iwami at FIGS. 6 and 8 and para. [0056]: “if it is estimated that the visibility state is not good (No in S15), in step S17, the image region corresponding to the predicted course is identified on the display image obtained in step S11, and the prediction is performed. The TTC position for poor visibility is displayed in the image area corresponding to the course. In this embodiment, a form as shown in FIG. 6B is used as the TTC position for poor visibility. Accordingly, the first and second TTC positions are closer to the vehicle than in the case where the visibility state is good (in the case of step S16), and the positions on the image corresponding to the closer first and second TTC positions are closer to the vehicle”);
display first indicia with respect to the selectively generated images, based at least in part on the plurality of zones concentrically extending from a point associated with the work vehicle (Iwami at FIG. 8 and para. [0053]: “FIG. 8A, an example of the first to third auxiliary lines 111 to 113 superimposed on the display image 101 is shown”);
determine, from the captured images of the at least one of the vehicle-mounted cameras in the backward direction of movement, a distance to any one or more detected objects in the respective field of vision (Iwami at para. [0057]: “the expected arrival time TTC (= distance / relative speed) can be calculated based on the distance from the host vehicle to the object and the relative speed of the vehicle with respect to the object”); and
display second indicia with respect to the selectively generated images, wherein the displayed second indicia correspond to a respective intervention state for the any one or more of the detected objects in the respective field of vision and further located within any of the one or more zones, the respective intervention state determined based on at least the determined distance, the at least one operating characteristic of the work vehicle, the predicted work state, and the threshold for alerts (Iwami at para. [0060]: “an example in which the object is highlighted on the display image 101 as shown in FIG. 8A is shown. In this example, as described above, the first auxiliary line 111 of green or blue is displayed at the first TTC position. Therefore, if the TTC position corresponding to the TTC time closest to the predicted arrival time TTC of the object is specified as the first TTC position, the object is displayed in green or blue as indicated by reference numeral 131”; para. [0061]: “The method for displaying the target object in a predetermined color may be, for example, converting the color of the pixel in the region extracted as the target object into the predetermined color for display, or the human of the predetermined color An icon image imitating the image may be superimposed on the area extracted as the object”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the work vehicle of Bruflodt regarding backward object detection by adding the plurality of zones of Iwami with a reasonable expectation of success. The motivation to modify the work vehicle of Bruflodt in view of Iwami is to enhance driver situational awareness.
Office Note:
The limitation “work state” is interpreted as “mode or condition of being associated with a specific task, duty, function, or assignment” (“Work.” Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/work. Accessed 8 Oct. 2025.; “State.” Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/state. Accessed 8 Oct. 2025.).
The limitation “work area” is interpreted as “the surface included within a set of lines associated with a specific task, duty, function, or assignment” (“Work.” Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/work. Accessed 8 Oct. 2025.; “Area.” Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/area. Accessed 8 Oct. 2025.).
Regarding claim 18, Bruflodt discloses a system for dynamically adjusting alerts based on detected objects external to a work vehicle, the system comprising:
one or more processors communicatively linked to (Bruflodt at para. [0051]: “The controller 112 includes or may be associated with a processor 212”) a plurality of vehicle-mounted object sensors each configured to generate object signals representative of detected objects in a respective field of vision (Bruflodt at para. [0049]: “Other sensors may collectively define an obstacle detection system 206, various examples of which may include ultrasonic sensors, laser scanners, radar wave transmitters and receivers, thermal sensors, imaging devices, structured light sensors, other optical sensors, and the like”), and to a plurality of vehicle-mounted cameras each configured to generate image data corresponding to a respective image region (Bruflodt at para. [0046]: “four cameras 202 are arranged at each of front, left, rear, and right sides from the perspective of a working direction of the work vehicle 100, for recording individual image regions of the surroundings of the work vehicle 100 from different image recording positions”), wherein the respective field of vision for each of the plurality of object sensors overlaps with the image region for at least one of the plurality of cameras (Bruflodt at para. [0076]: “A surround view image 242 in certain embodiments may be manipulated 440, such as to change the region of interest, automatically based on a trigger 435 associated with obstacle detection”);
wherein the one or more processors are configured to
predict a work state associated with at least a forward or backward movement direction for the work vehicle based on at least one operating characteristic of the work vehicle (Bruflodt at para. [0073]: “when the control system 200 determines that the self-propelled work vehicle 100 is performing a certain function, the surround view image 242 can automatically change to a smaller sub-view 244 which gives more focused visibility appropriate to that function. For example, if the work vehicle 200 is determined to be backing up straight (or such movement is predicted based on detected steering commands or based on a detected work state consistent with such movement), the surround view image 242 changes to a sub-view 244 of the rear of the work vehicle 100”);
when a work state associated with a backward movement direction is predicted, selectively generate images on a display unit corresponding to the at least one vehicle-mounted cameras corresponding to an area of interest in a backward direction of movement for the work vehicle (Bruflodt at para. [0073]: “if the work vehicle 200 is determined to be backing up straight (or such movement is predicted based on detected steering commands or based on a detected work state consistent with such movement), the surround view image 242 changes to a sub-view 244 of the rear of the work vehicle 100”),
determine one or more working conditions associated with visibility or certainty with respect to identifying the detected objects in at least the corresponding at least one field of vision, or lower control of the work vehicle and/or work implements thereof (Bruflodt at para. [0073]: “A surround view image 242 in certain embodiments may be manipulated 440 automatically based on a trigger 433 associated with detection of certain operating conditions or a predetermined work state”; “if the work vehicle 200 is determined to be backing up straight (or such movement is predicted based on detected steering commands or based on a detected work state consistent with such movement), the surround view image 242 changes to a sub-view 244 of the rear of the work vehicle 100”; “when the work vehicle 100 is commanded to swing, the control system 200 may likewise detect the change in work state and change the surround view image 242 to a sub-view 244 showing the area into which the counterweight is swinging”; para. [0074]: “upon detecting that the work vehicle 100 is traveling faster, the control system 200 may manipulate the simulated field of view to become larger”; para. [0075]: “the simulated field of view may become automatically adjusted to increase the field of view upon detecting that a working implement 120 such as an excavator boom reaches out farther”; “if it is detected that an excavator 100 is operating in a twelve-second cycle of 180-degree truck loading, then the control system 200 may anticipate when the operator is about to swing the main frame 132 of the work vehicle 100 and preemptively bring up a sub-view 244 of the area into which the operator will swing”; Moving speed and direction of the work vehicle and operation of the work implements are associated with “visibility or certainty with respect to identifying the detected objects,” or “lower control of the work vehicle and/or work implements”).
However, Bruflodt does not explicitly state:
specify sizes of a plurality of zones concentrically extending from a point associated with the work vehicle and thresholds for alerts corresponding to a current work area, based at least in part on the determined one or more working conditions and the at least one operating characteristic of the work vehicle;
display first indicia with respect to the selectively generated images, based at least in part on the plurality of zones concentrically extending from a point associated with the work vehicle;
determine, from the captured images of the at least one of the vehicle-mounted cameras in the backward direction of movement, a distance to any one or more detected objects in the respective field of vision; and
display second indicia with respect to the selectively generated images, wherein the displayed second indicia correspond to a respective intervention state for the any one or more of the detected objects in the respective field of vision and further located within any of the one or more zones, the respective intervention state determined based on at least the determined distance, the at least one operating characteristic of the work vehicle, the predicted work state, and the threshold for alerts.
In the same field of endeavor, Iwami teaches:
specify sizes of a plurality of zones concentrically extending from a point associated with the work vehicle and thresholds for alerts corresponding to a current work area, based at least in part on the determined one or more working conditions and the at least one operating characteristic of the work vehicle (Iwami at FIGS. 6 and 8 and para. [0056]: “if it is estimated that the visibility state is not good (No in S15), in step S17, the image region corresponding to the predicted course is identified on the display image obtained in step S11, and the prediction is performed. The TTC position for poor visibility is displayed in the image area corresponding to the course. In this embodiment, a form as shown in FIG. 6B is used as the TTC position for poor visibility. Accordingly, the first and second TTC positions are closer to the vehicle than in the case where the visibility state is good (in the case of step S16), and the positions on the image corresponding to the closer first and second TTC positions are closer to the vehicle”);
display first indicia with respect to the selectively generated images, based at least in part on the plurality of zones concentrically extending from a point associated with the work vehicle (Iwami at FIG. 8 and para. [0053]: “FIG. 8A, an example of the first to third auxiliary lines 111 to 113 superimposed on the display image 101 is shown”);
determine, from the captured images of the at least one of the vehicle-mounted cameras in the backward direction of movement, a distance to any one or more detected objects in the respective field of vision (Iwami at para. [0057]: “the expected arrival time TTC (= distance / relative speed) can be calculated based on the distance from the host vehicle to the object and the relative speed of the vehicle with respect to the object”); and
display second indicia with respect to the selectively generated images, wherein the displayed second indicia correspond to a respective intervention state for the any one or more of the detected objects in the respective field of vision and further located within any of the one or more zones, the respective intervention state determined based on at least the determined distance, the at least one operating characteristic of the work vehicle, the predicted work state, and the threshold for alerts (Iwami at para. [0060]: “an example in which the object is highlighted on the display image 101 as shown in FIG. 8A is shown. In this example, as described above, the first auxiliary line 111 of green or blue is displayed at the first TTC position. Therefore, if the TTC position corresponding to the TTC time closest to the predicted arrival time TTC of the object is specified as the first TTC position, the object is displayed in green or blue as indicated by reference numeral 131”; para. [0061]: “The method for displaying the target object in a predetermined color may be, for example, converting the color of the pixel in the region extracted as the target object into the predetermined color for display, or the human of the predetermined color An icon image imitating the image may be superimposed on the area extracted as the object”).
Office Note:
The limitation “work state” is interpreted as “mode or condition of being associated with a specific task, duty, function, or assignment” (“Work.” Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/work. Accessed 8 Oct. 2025.; “State.” Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/state. Accessed 8 Oct. 2025.).
The limitation “work area” is interpreted as “the surface included within a set of lines associated with a specific task, duty, function, or assignment” (“Work.” Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/work. Accessed 8 Oct. 2025.; “Area.” Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/area. Accessed 8 Oct. 2025.).
Claims 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Bruflodt in view of Iwami further in view of Stratton et al. (US 2009/0043462 A1, hereinafter “Stratton”).
Regarding claim 10, Bruflodt in view of Iwami teaches the method of claim 1.
However, Bruflodt in view of Iwami does not explicitly state further comprising generating control signals to one or more actuators for adjusting the at least one operating characteristic of the work vehicle based on at least the respective intervention state.
In the same field of endeavor, Stratton teaches further comprising generating control signals to one or more actuators for adjusting the at least one operating characteristic of the work vehicle based on at least the respective intervention state (Stratton at para. [0040]: “the command signals may include such commands as velocity limit commands, acceleration limit commands, braking commands, and direction commands. That is, controller 42 may generate control commands to limit a maximum velocity or acceleration of machine 12, to cause machine 12 to slow down or come to a full stop, or to redirect machine 12 along an avoidance trajectory”; para. [0054]: “Controller 36 may then evaluate the instructions of the signal and manipulate the braking element of machine 12 to bring machine 12 to a full stop. The stopping signal may keep machine 12 from colliding with the obstacle”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Bruflodt in view of Iwami by adding the control signals as taught by Stratton with a reasonable expectation of success. The motivation to modify the method of Bruflodt in view of Iwami further in view of Stratton is to provide sufficient warning for the operator to adequately maneuver the machine away from damaging encounters (Stratton at para. [0003]).
Regarding claim 17, Bruflodt in view of Iwami teaches the work vehicle of claim 13.
However, Bruflodt in view of Iwami does not explicitly state wherein the controller is further configured to generate control signals to one or more actuators for adjusting the at least one operating characteristic of the work vehicle based on at least the respective intervention state.
In the same field of endeavor, Stratton teaches wherein the controller is further configured to generate control signals to one or more actuators for adjusting the at least one operating characteristic of the work vehicle based on at least the respective intervention state (Stratton at para. [0040]: “the command signals may include such commands as velocity limit commands, acceleration limit commands, braking commands, and direction commands. That is, controller 42 may generate control commands to limit a maximum velocity or acceleration of machine 12, to cause machine 12 to slow down or come to a full stop, or to redirect machine 12 along an avoidance trajectory”; para. [0054]: “Controller 36 may then evaluate the instructions of the signal and manipulate the braking element of machine 12 to bring machine 12 to a full stop. The stopping signal may keep machine 12 from colliding with the obstacle”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the work vehicle of Bruflodt in view of Iwami by adding the control signals as taught by Stratton with a reasonable expectation of success. The motivation to modify the work vehicle of Bruflodt in view of Iwami further in view of Stratton is to provide sufficient warning for the operator to adequately maneuver the machine away from damaging encounters (Stratton at para. [0003]).
Claims 5-9, 14-16, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bruflodt in view of Iwami further in view of Nakano et al. (US 2019/0005726 A1, hereinafter “Nakano”).
Regarding claim 5, Bruflodt in view of Iwami teaches the method of claim 1.
However, Bruflodt in view of Iwami does not explicitly state wherein the respective intervention state corresponds to an estimated time to engagement by the work vehicle with the respective detected object based on at least the determined distance and the at least one operating characteristic of the work vehicle.
In the same field of endeavor, Nakano teaches wherein the respective intervention state corresponds to an estimated time to engagement by the work vehicle with the respective detected object based on at least the determined distance and the at least one operating characteristic of the work vehicle (Nakano at FIG. 6 and para. [0070]: “display controller 52 of controller 5 may set, as the display objects, both the first detection object that exists closest to vehicle 100 (host vehicle) and the second detection object having the shortest time to collision in each of detection areas 401 to 404”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Bruflodt in view of Iwami by adding the estimated time to engagement as taught by Nakano with a reasonable expectation of success. The motivation to modify the method of Bruflodt in view of Iwami further in view of Nakano is to provide a warning based on a degree of danger with respect to an object which exists in the danger latent area (Nakano at para. [0003]).
Regarding claim 6, Bruflodt in view of Iwami further in view of Nakano teaches the method of claim 5.
Nakano further teaches wherein the respective intervention state corresponds to an estimated time to engagement by the work vehicle with the respective detected object further based on a potential movement state of the respective detected object (Nakano at para. [0043]: “Detection system 7 acquires information, such as a distance from vehicle 100 to detection object 700, a relative coordinate of detection object 700 to vehicle 100, a relative velocity between detection object 700 and vehicle 100”; para. [0054]: “The time to collision is time until the host vehicle collides with the detection object when the present relative velocity between the host vehicle and the detection object is maintained”; para. [0055]: “controller 5 may determine detection object 700 having the highest degree of danger as the display object based on the relative coordinate between detection object 700 and vehicle 100 (host vehicle), movement directions of detection object 700 and the host vehicle, and a predicted value of travelling speed”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Bruflodt in view of Iwami further in view of Nakano by adding the potential movement state of the respective detected object as taught by Nakano with a reasonable expectation of success. The motivation to modify the method of Bruflodt in view of Iwami further in view of Nakano is to provide a warning based on a degree of danger with respect to an object which exists in the danger latent area (Nakano at para. [0003]).
Regarding claim 7, Bruflodt in view of Iwami further in view of Nakano teaches the method of claim 6.
Nakano further teaches wherein the respective intervention state corresponds to an estimated time to engagement by the work vehicle with the respective detected object further based on a detected movement and/or predicted future position of the respective detected object (Nakano at para. [0043]: “Detection system 7 acquires information, such as a distance from vehicle 100 to detection object 700, a relative coordinate of detection object 700 to vehicle 100, a relative velocity between detection object 700 and vehicle 100”; para. [0054]: “The time to collision is time until the host vehicle collides with the detection object when the present relative velocity between the host vehicle and the detection object is maintained”; para. [0055]: “controller 5 may determine detection object 700 having the highest degree of danger as the display object based on the relative coordinate between detection object 700 and vehicle 100 (host vehicle), movement directions of detection object 700 and the host vehicle, and a predicted value of travelling speed”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the respective intervention state of Bruflodt in view of Iwami further in view of Nakano to be based on the detected movement and/or predicted future position of the respective detected object as taught by Nakano with a reasonable expectation of success. The motivation to modify the respective intervention state of Bruflodt in view of Iwami further in view of Nakano is to provide a warning based on a degree of danger with respect to an object which exists in the danger latent area (Nakano at para. [0003]).
Regarding claim 8, Bruflodt in view of Iwami further in view of Nakano teaches the method of claim 6.
Nakano further teaches wherein the displayed second indicia comprises a bounded area about the respective detected object based on the potential movement state thereof (Nakano at FIG. 6 and para. [0053]: “Each of virtual images 311, 321, 331 may be a frame-like virtual image surrounding the display object and may be appropriately changed. For example, controller 5 may change the content of each of virtual images 311, 321, 331 according to the attributes of detection objects 711, 721, 731 serving as the display objects”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Bruflodt in view of Iwami further in view of Nakano by adding the bounded area as taught by Nakano with a reasonable expectation of success. The motivation to modify the method of Bruflodt in view of Iwami further in view of Nakano is to provide a warning based on a degree of danger with respect to an object which exists in the danger latent area (Nakano at para. [0003]).
Regarding claim 9, Bruflodt in view of Iwami further in view of Nakano teaches the method of claim 5.
Iwami further teaches wherein the respective intervention state further corresponds to a predicted response in the at least one operating characteristic of the work vehicle (Iwami at para. [0043]: “the distances Da3 and Db3 of the third auxiliary line 113 are determined so that they can be stopped when the vehicle performs a braking operation at a predetermined deceleration at the current time”; The “predicted response” includes the braking operation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Bruflodt in view of Iwami further in view of Nakano by adding the predicted response of Iwami with a reasonable expectation of success. The motivation to modify the method of Bruflodt in view of Iwami further in view of Nakano is to enhance driver situational awareness.
Regarding claim 14, Bruflodt in view of Iwami teaches the work vehicle of claim 13.
However, Bruflodt in view of Iwami does not explicitly state wherein the respective intervention state corresponds to an estimated time to engagement by the work vehicle with the respective detected object based on at least the determined distance and the at least one operating characteristic of the work vehicle.
In the same field of endeavor, Nakano teaches wherein the respective intervention state corresponds to an estimated time to engagement by the work vehicle with the respective detected object based on at least the determined distance and the at least one operating characteristic of the work vehicle (Nakano at FIG. 6 and para. [0070]: “display controller 52 of controller 5 may set, as the display objects, both the first detection object that exists closest to vehicle 100 (host vehicle) and the second detection object having the shortest time to collision in each of detection areas 401 to 404”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the work vehicle of Bruflodt in view of Iwami by adding the estimated time to engagement as taught by Nakano with a reasonable expectation of success. The motivation to modify the work vehicle of Bruflodt in view of Iwami further in view of Nakano is to provide a warning based on a degree of danger with respect to an object which exists in the danger latent area (Nakano at para. [0003]).
Regarding claim 15, Bruflodt in view of Iwami further in view of Nakano teaches the work vehicle of claim 14.
Nakano further teaches wherein the respective intervention state corresponds to an estimated time to engagement by the work vehicle with the respective detected object further based on a potential movement state of the respective detected object (Nakano at para. [0043]: “Detection system 7 acquires information, such as a distance from vehicle 100 to detection object 700, a relative coordinate of detection object 700 to vehicle 100, a relative velocity between detection object 700 and vehicle 100”; para. [0054]: “The time to collision is time until the host vehicle collides with the detection object when the present relative velocity between the host vehicle and the detection object is maintained”; para. [0055]: “controller 5 may determine detection object 700 having the highest degree of danger as the display object based on the relative coordinate between detection object 700 and vehicle 100 (host vehicle), movement directions of detection object 700 and the host vehicle, and a predicted value of travelling speed”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the work vehicle of Bruflodt in view of Iwami further in view of Nakano by adding the potential movement state of the respective detected object as taught by Nakano with a reasonable expectation of success. The motivation to modify the work vehicle of Bruflodt in view of Iwami further in view of Nakano is to provide a warning based on a degree of danger with respect to an object which exists in the danger latent area (Nakano at para. [0003]).
Regarding claim 16, Bruflodt in view of Iwami further in view of Nakano teaches the work vehicle of claim 15.
Nakano further teaches wherein the displayed second indicia comprises a bounded area about the respective detected object based on the potential movement state thereof (Nakano at FIG. 6 and para. [0053]: “Each of virtual images 311, 321, 331 may be a frame-like virtual image surrounding the display object and may be appropriately changed. For example, controller 5 may change the content of each of virtual images 311, 321, 331 according to the attributes of detection objects 711, 721, 731 serving as the display objects”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the work vehicle of Bruflodt in view of Iwami further in view of Nakano by adding the bounded area as taught by Nakano with a reasonable expectation of success. The motivation to modify the work vehicle of Bruflodt in view of Iwami further in view of Nakano is to provide a warning based on a degree of danger with respect to an object which exists in the danger latent area (Nakano at para. [0003]).
Regarding claim 19, Bruflodt in view of Iwami teaches the system of claim 18.
However, Bruflodt in view of Iwami does not explicitly state wherein the respective intervention state corresponds to an estimated time to engagement by the work vehicle with the respective detected object based on at least the determined distance and the at least one operating characteristic of the work vehicle.
In the same field of endeavor, Nakano teaches wherein the respective intervention state corresponds to an estimated time to engagement by the work vehicle with the respective detected object based on at least the determined distance and the at least one operating characteristic of the work vehicle (Nakano at FIG. 6 and para. [0070]: “display controller 52 of controller 5 may set, as the display objects, both the first detection object that exists closest to vehicle 100 (host vehicle) and the second detection object having the shortest time to collision in each of detection areas 401 to 404”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bruflodt in view of Iwami by adding the estimated time to engagement as taught by Nakano with a reasonable expectation of success. The motivation to modify the system of Bruflodt in view of Iwami further in view of Nakano is to provide a warning based on a degree of danger with respect to an object which exists in the danger latent area (Nakano at para. [0003]).
Regarding claim 20, Bruflodt in view of Iwami teaches the system of claim 18.
However, Bruflodt in view of Iwami does not explicitly state wherein the respective intervention state corresponds to an estimated time to engagement by the work vehicle with the respective detected object further based on a potential movement state of the respective detected object.
In the same field of endeavor, Nakano teaches wherein the respective intervention state corresponds to an estimated time to engagement by the work vehicle with the respective detected object (Nakano at FIG. 6 and para. [0070]: “display controller 52 of controller 5 may set, as the display objects, both the first detection object that exists closest to vehicle 100 (host vehicle) and the second detection object having the shortest time to collision in each of detection areas 401 to 404”) further based on a potential movement state of the respective detected object (Nakano at para. [0043]: “Detection system 7 acquires information, such as a distance from vehicle 100 to detection object 700, a relative coordinate of detection object 700 to vehicle 100, a relative velocity between detection object 700 and vehicle 100”; para. [0054]: “The time to collision is time until the host vehicle collides with the detection object when the present relative velocity between the host vehicle and the detection object is maintained”; para. [0055]: “controller 5 may determine detection object 700 having the highest degree of danger as the display object based on the relative coordinate between detection object 700 and vehicle 100 (host vehicle), movement directions of detection object 700 and the host vehicle, and a predicted value of travelling speed”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bruflodt in view of Iwami by adding the potential movement state of the respective detected object as taught by Nakano with a reasonable expectation of success. The motivation to modify the system of Bruflodt in view of Iwami further in view of Nakano is to provide a warning based on a degree of danger with respect to an object which exists in the danger latent area (Nakano at para. [0003]).
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Bruflodt in view of Iwami further in view of Numata et al. (US 2013/0307986 A1, hereinafter “Numata”).
Regarding claim 12, Bruflodt in view of Iwami teaches the method of claim 1.
However, Bruflodt in view of Iwami does not explicitly state wherein each of the plurality of zones are non-linearly scaled with respect to others of the plurality of zones based on changes in travelling speed of the work vehicle over time.
In the same field of endeavor, Numata teaches wherein each of the plurality of zones are non-linearly scaled with respect to others of the plurality of zones based on changes in travelling speed of the work vehicle over time (Numata at para. [0056]: “a numeral 21 indicates a superimposed image indicating a dangerous area, a numeral 22 indicates a superimposed image indicating a semi-dangerous area and a numeral 23 indicates a superimposed image indicating path anticipating guide lines”; para. [0057]: “lengths of the dangerous area and the semi-dangerous area in a driving direction are short at low speed, and lengths of the dangerous area and the semi-dangerous area in a driving direction are lengthened at high speed. This is because a braking distance is lengthened at high speed and is shortened at low speed”; para. [0059]: “a brake characteristic (speed-time graph) just before the vehicle 30 is stopped at the back starting position is shown in FIG. 6. This shows that a vehicle driving at constant speed is brake-pedaled and the speed thereof becomes zero (0)”; As shown in FIG. 6, the braking distance and the vehicle speed have a non-linear relationship).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Bruflodt in view of Iwami by adding the plurality of zones as taught by Numata with a reasonable expectation of success. The motivation to modify the zones of Bruflodt in view of Iwami further in view of Numata is to provide clear perception of a braking range (see Numata at para. [0057]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and can be found in the attached PTO-892 form.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JISUN CHOI whose telephone number is (571)270-0710. The examiner can normally be reached Mon-Fri, 9:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Scott Browne can be reached on (571)270-0151. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JISUN CHOI/Examiner, Art Unit 3666
/SCOTT A BROWNE/Supervisory Patent Examiner, Art Unit 3666