DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This final action is in response to Applicant’s amended filing of 12/01/2025.
Claims 1-4 and 6-18 are currently pending and have been examined. Applicant has amended claims 1-4, 6, and 9; canceled claim 5; and added new claims 14-18.
Response to Arguments
Applicant’s arguments with respect to claim 1 being rejected as patentably indistinct from claim 1 of Application No. 18/242,042 have been fully considered and are persuasive. The double patenting rejection against claim 1 has been withdrawn.
Applicant’s arguments with respect to claims 2-4 and 9 rejected under 35 U.S.C. 101 as being directed to or encompassing a human organism have been fully considered and are persuasive. The rejection under 35 U.S.C. 101 against claims 2-4 and 9 has been withdrawn.
Applicant’s arguments with respect to claims 1-13 rejected under 35 U.S.C. 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Objections
Claims 3 and 9 are objected to because of the following informalities:
Applicant’s amendment to claim 3 removes “speaker” from the limitation “…wherein the first speaker includes a plurality of…”. This removal makes the limitation unclear because it does not indicate what “first” item it references. The Examiner presumes the removal is unintentional.
Additionally, Applicant’s amendment to claim 9 recites “…the voice of the user a voice is audible…” This amendment is grammatically inconsistent.
Appropriate correction is required for clarity.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6-7, and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Yasui et al. (US 20200159014 A1; reference provided in IDS submitted 08/12/2025) in view of Yamamoto (JP 2021170710 A; reference provided in IDS submitted 08/12/2025), Milevski et al. (US 20180123813 A1) and Winn et al. (US 20090096808 A1).
Regarding claim 1, Yasui discloses an information processing system (see at least abstract disclosing image providing system) comprising:
a first device that is mounted on a mobile object boarded by an occupant (see at least ¶ [0043] and [0050] and Figs. 1-2 disclosing a first image providing device mounted on a passenger vehicle, occupied by first users);
and a second device that is used by a user at a location different from the mobile object (see at least ¶ [0043] and [0050] and Fig. 1 disclosing a second image providing device in a place other than the vehicle, operated by second users),
wherein the first device includes:
a first communication device configured to communicate with a second communication device of the second device (see at least ¶ [0044-0045] and [0062-0063] disclosing first and second image providing devices include respective communication units to communicate with each other directly or through a management server);
and wherein the second device includes:
the second communication device configured to communicate with the first communication device (see at least ¶ [0044-0045] and [0062-0063] disclosing first and second image providing devices include respective communication units to communicate with each other directly or through a management server);
a detection device for detecting an orientation direction of the user (see at least ¶ [0071] disclosing the second image providing device includes a line-of-sight sensor for detecting the line-of-sight movement of the user);
and a display device configured to display an image corresponding to the orientation direction viewed from the predetermined seat among images captured by the camera unit (see at least ¶ [0071] and Fig. 8 disclosing the second image providing device includes a human machine interface (HMI) embodied as a head mount display device that tracks a user’s line of sight and displays images from in-vehicle cameras),
wherein the second communication device transmits information on the orientation direction to the first communication device (see at least ¶ [0071-0072] disclosing the second image providing device includes a line-of-sight sensor for detecting the line-of-sight movement of the user).
While Yasui discloses a camera unit that is provided on the mobile object and has one or more cameras including at least an indoor camera capable of capturing an image of an interior of the mobile object (see at least ¶ [0046-0048] and Fig. 2 disclosing in-vehicle cameras for imaging the inside and occupants of a vehicle), Yasui does not explicitly disclose the camera unit that is provided on a predetermined seat of the mobile object and has one or more cameras including at least an indoor camera capable of capturing an image of an interior of the mobile object viewed from the predetermined seat.
However, Yamamoto teaches a vehicle with cameras mounted around the driver’s seat (see at least ¶ [0012]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the seat-mounted cameras of Yamamoto into the image providing system of Yasui with a reasonable expectation of success because both inventions are directed toward vehicles with interior-mounted cameras. While Yamamoto describes the camera as mounted around the driver’s seat, it would be a simple matter of design choice for one of ordinary skill in the art to move the camera to other seats, which would allow perspective views of other seats to be imaged.
The combination of Yasui and Yamamoto does not explicitly disclose the second device further acquires height information indicating a height of a head of the user,
the second communication device transmits the height information to the first communication device,
and the display device displays an image, which is acquired via the second communication device.
However, Milevski suggests the second device further acquires height information indicating a height of a head of the user (see at least ¶ [0015-0017] and [0053] disclosing wireless earpieces operating as virtual reality displays that detect orientation, position, and user height as information to communicate images and audio over a conferencing system),
the second communication device transmits the height information to the first communication device (see at least ¶ [0015-0017], [0051], [0053], and [0056] disclosing wireless earpieces operating as virtual reality displays that detect orientation, position, and user height as information to communicate images and audio over a conferencing system with transceivers between units),
and the display device displays an image, which is acquired via the second communication device (see at least ¶ [0015-0017] and [0053] disclosing wireless earpieces operating as virtual reality displays that detect orientation, position, and user height as information to communicate images and audio over a conferencing system).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the height information considerations of Milevski into the combination of Yasui and Yamamoto with a reasonable expectation of success because all inventions are directed to communicating and displaying image information between users. While Milevski is directed toward conferencing and not toward vehicles, the intercommunication of images from one location to a virtual reality device user, and the return of the user’s audio and height information, is directly applicable to the systems of Yasui and Yamamoto. This would help more accurately present information from the vehicle to the virtual reality device user and more accurately portray the user to the vehicle occupants.
While Yasui suggests the first device further includes a first control device configured to cut out, from the image captured by the camera unit, an image corresponding to the orientation direction, which is acquired via the first communication device, and to transmit the cut image to the second communication device via the first communication device (see at least ¶ [0044-0045], [0062-0063], [0076], and [0078] disclosing first and second image providing devices include respective communication units to communicate with each other, including cut images from a person image extraction unit), the combination of Yasui, Yamamoto, and Milevski does not explicitly disclose a first control device configured to cut out, from the image captured by the camera unit, an image corresponding to the orientation direction viewed from the height indicated by the height information.
However, Winn suggests a first control device configured to cut out, from the image captured by the camera unit, an image corresponding to the orientation direction viewed from the height indicated by the height information (see at least ¶ [0051-0053], [0064], and [0075] disclosing object-level image editing that crops an image based on a detected object class dependent on the size of the object, including if the detected object class is a person).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the size-dependent image cropping of Winn into the combination of Yasui, Yamamoto, and Milevski with a reasonable expectation of success because all inventions are directed to communicating and displaying image information between users. While Winn is directed toward digital image editing and not toward vehicles, the manipulation and editing of images, particularly where the size of an image object informs the editing procedure, is directly applicable to the systems of Yasui, Yamamoto, and Milevski, with particular correlation to the image cutting functions of Yasui. This would help more accurately present information from the vehicle to the virtual reality device user and more accurately portray the user to the vehicle occupants.
Regarding claim 6, Yasui suggests the first control device controls the first communication device to selectively transmit the image corresponding to the orientation direction acquired via the first communication device among the images captured by the camera unit to the second communication device (see at least ¶ [0071-0072] and [0116] disclosing the second image providing device includes a line-of-sight sensor for detecting the line-of-sight movement of the user such that in-vehicle camera provides a selected panorama image in a central visual field of the head mount display device),
and the display device of the second device displays the image corresponding to the orientation direction viewed from the predetermined seat, which is acquired via the second communication device (see at least ¶ [0071-0072] and [0116] disclosing the second image providing device includes a line-of-sight sensor for detecting the line-of-sight movement of the user such that in-vehicle camera provides a selected panorama image in a central visual field of the head mount display device).
Regarding claim 7, Yasui suggests the first communication device transmits the images captured by the camera unit to the second communication device (see at least ¶ [0071-0072] disclosing the second image providing device includes a line-of-sight sensor for detecting the line-of-sight movement of the user such that in-vehicle camera provides a selected panorama image in a central visual field of the head mount display device),
and the second device further has a second control device that causes the display device to selectively display the image corresponding to the orientation direction among the images captured by the camera unit (see at least ¶ [0071-0072] and [0116] disclosing the second image providing device includes a line-of-sight sensor for detecting the line-of-sight movement of the user such that in-vehicle camera provides a selected panorama image in a central visual field of the head mount display device).
Regarding claim 10, Yasui discloses the display device is a display device of virtual reality (VR) goggles (see at least ¶ [0071] and Fig. 8 disclosing the second image providing device includes a human machine interface (HMI) embodied as a head mount display device that tracks a user’s line of sight and displays images from in-vehicle cameras),
and the detection device includes a physical sensor attached to the VR goggles (see at least ¶ [0071] disclosing the second image providing device includes a line-of-sight sensor for detecting the line-of-sight movement of the user).
Regarding claim 11, the combination of Yasui, Yamamoto, Milevski, and Winn does not explicitly disclose the display device is capable of executing a mode in which a displayable angular range of the display device is limited.
However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to design the display device to operate at a limited displayable angular range, since it has been held that where the general conditions of a claim are disclosed in the prior art, discovering the optimum or workable ranges involves only routine skill in the art. In re Aller, 105 USPQ 233.
Regarding claim 12, Yasui discloses the mobile object is a vehicle (see at least ¶ [0043] and [0050] and Figs. 1-2 disclosing a first image providing device mounted on a passenger vehicle).
Yasui does not explicitly disclose the predetermined seat is an assistant driver's seat.
However, Yamamoto teaches a vehicle with cameras mounted around the driver’s seat (see at least ¶ [0012]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the seat-mounted cameras of Yamamoto into the image providing system of Yasui with a reasonable expectation of success because both inventions are directed toward vehicles with interior-mounted cameras. While Yamamoto describes the camera as mounted around the driver’s seat, it would be a simple matter of design choice for one of ordinary skill in the art to move the camera to other seats, which would allow perspective views of other seats to be imaged.
Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Yasui et al. in view of Yamamoto, Milevski, and Winn, as applied to claim 1 above, and in view of Okubo et al. (JP 2021150835 A; reference provided in IDS submitted 08/12/2025).
Regarding claim 2, the combination of Yasui, Yamamoto, Milevski, and Winn does not explicitly disclose the first speaker causes the occupant to localize a sound image so that the voice is audible from the predetermined seat and outputs the voice uttered by the user.
However, Okubo suggests the first speaker causes the occupant to localize a sound image so that the voice is audible from the predetermined seat and outputs the voice uttered by the user (see at least ¶ [0077-0078] disclosing the sound image position corresponds to a passenger seat so an occupant can localize the source).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the sound image localization of Okubo into the combination of Yasui, Yamamoto, Milevski, and Winn with a reasonable expectation of success because all inventions are directed toward communicating image and sound information between equipment mounted on a vehicle and an outside source. This would allow the inside occupant and the outside user with the head mount display device to communicate with each other and determine their orientation relative to each other.
Regarding claim 3, the combination of Yasui, Yamamoto, Milevski, and Winn does not explicitly disclose the first speaker includes a plurality of first child speakers arranged at positions different from each other,
the first device further includes a first control device that causes the occupant to localize a sound image so that the voice is audible from the predetermined seat by adjusting a volume and/or a phase difference of the plurality of first child speakers.
However, Okubo suggests the first speaker includes a plurality of first child speakers arranged at positions different from each other (see at least ¶ [0082] disclosing a plurality of speakers positioned around the vehicle to help localize occupants),
the first device further includes a first control device that causes the occupant to localize a sound image so that the voice is audible from the predetermined seat by adjusting a volume and/or a phase difference of the plurality of first child speakers (see at least ¶ [0077-0078] disclosing the sound image position corresponds to a passenger seat so an occupant can localize the source).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the sound image localization of Okubo into the combination of Yasui, Yamamoto, Milevski, and Winn with a reasonable expectation of success because all inventions are directed toward communicating image and sound information between equipment mounted on a vehicle and an outside source. This would allow the inside occupant and the outside user with the head mount display device to communicate with each other and determine their orientation relative to each other.
Regarding claim 4, the combination of Yasui and Yamamoto does not disclose the second device further acquires height information indicating a height of the head of the user so that the voice is audible from a height position represented by the height information on the predetermined seat.
However, Milevski suggests the second device further acquires height information indicating a height of the head of the user so that the voice is audible from a height position represented by the height information on the predetermined seat (see at least ¶ [0015-0017] and [0053] disclosing wireless earpieces operating as virtual reality displays that detect orientation, position, and user height as information to communicate images and audio over a conferencing system).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the height information considerations of Milevski into the combination of Yasui and Yamamoto with a reasonable expectation of success because all inventions are directed to communicating and displaying image information between users. While Milevski is directed toward conferencing and not toward vehicles, the intercommunication of images from one location to a virtual reality device user, and the return of the user’s audio and height information, is directly applicable to the systems of Yasui and Yamamoto. This would help more accurately present information from the vehicle to the virtual reality device user and more accurately portray the user to the vehicle occupants.
The combination of Yasui, Yamamoto, Milevski, and Winn does not explicitly disclose the first control device causes the occupant to localize a sound image, and causes the first speaker to output the voice uttered by the user.
However, Okubo suggests the first control device causes the occupant to localize a sound image, and causes the first speaker to output the voice uttered by the user (see at least ¶ [0077-0078] disclosing the sound image position corresponds to a passenger seat so an occupant can localize the source).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the sound image localization of Okubo into the combination of Yasui, Yamamoto, Milevski, and Winn with a reasonable expectation of success because all inventions are directed toward communicating image and sound information between equipment mounted on a vehicle and an outside source. This would allow the inside occupant and the outside user with the head mount display device to communicate with each other and determine their orientation relative to each other.
Claims 8 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Yasui et al. in view of Yamamoto, Milevski, and Winn, as applied to claim 1 above, and in view of Nojima (JP H11205782 A; reference provided in IDS submitted 08/12/2025).
Regarding claim 8, while Yasui discloses the first device further has at least a first microphone that collects a voice uttered by the occupant (see at least ¶ [0050] and Fig. 1 disclosing the first image providing device has a human machine interface (HMI) which includes a microphone), the combination of Yasui, Yamamoto, Milevski, and Winn does not explicitly disclose the second device further has a second speaker that outputs the voice uttered by the occupant and acquired via the second communication device,
and the first communication device transmits a voice collected by the first microphone to the second communication device.
However, Nojima suggests the second device further has a second speaker that outputs the voice uttered by the occupant and acquired via the second communication device (see at least ¶ [0023] of the machine translation disclosing a vehicle mounted terminal receives and sends voice communication information from and to a communication station via microphones),
and the first communication device transmits a voice collected by the first microphone to the second communication device (see at least ¶ [0023] of the machine translation disclosing a vehicle mounted terminal receives and sends voice communication information from and to a communication station via microphones).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the voice communication of Nojima into the combination of Yasui, Yamamoto, Milevski, and Winn with a reasonable expectation of success because all inventions are directed toward communicating image and sound information between equipment mounted on a vehicle and an outside source. This would allow the outside user with the head mount display device to communicate with occupants of the vehicle.
Regarding claim 14, the combination of Yasui, Yamamoto, Milevski, and Winn does not explicitly disclose the second device further includes a second microphone configured to collect a voice uttered by the user,
and the second communication device transmits the voice collected by the second microphone to the first communication device.
However, Nojima suggests the second device includes a second microphone configured to collect a voice uttered by the user (see at least ¶ [0023] of the machine translation disclosing a vehicle mounted terminal receives and sends voice communication information from and to a communication station via microphones);
and the second communication device transmits a voice collected by the second microphone to the first communication device (see at least ¶ [0023] of the machine translation disclosing a vehicle mounted terminal receives and sends voice communication information from and to a communication station via microphones).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the voice communication of Nojima into the combination of Yasui, Yamamoto, Milevski, and Winn with a reasonable expectation of success because all inventions are directed toward communicating image and sound information between equipment mounted on a vehicle and an outside source. This would allow the outside user with the head mount display device to communicate with occupants of the vehicle.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Yasui et al. in view of Yamamoto, Milevski, Winn, and Nojima, as applied to claim 8 above, and in view of Okubo et al. (JP 2021150835 A; reference provided in IDS submitted 08/12/2025).
Regarding claim 9, the combination of Yasui, Yamamoto, Milevski, Winn, and Nojima does not explicitly disclose the second speaker causes a sound image to be localized so the voice of the user a voice is audible from a position of the occupant viewed from the predetermined seat, and outputs the voice uttered by the occupant.
However, Okubo suggests the second speaker causes a sound image to be localized so the voice of the user a voice is audible from a position of the occupant viewed from the predetermined seat, and outputs the voice uttered by the occupant (see at least ¶ [0077-0078] disclosing the sound image position corresponds to a passenger seat so an occupant can localize the source).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the sound image localization of Okubo into the combination of Yasui, Yamamoto, Milevski, Winn, and Nojima with a reasonable expectation of success because all inventions are directed toward communicating image and sound information between equipment mounted on a vehicle and an outside source. This would allow the inside occupant and the outside user with the head mount display device to communicate with each other and determine their orientation relative to each other.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Yasui et al. in view of Yamamoto, Milevski, and Winn, as applied to claim 1 above, and in view of Hoshi (JP 2005157762 A; reference provided in IDS submitted 08/12/2025).
Regarding claim 13, the combination of Yasui, Yamamoto, Milevski, and Winn does not explicitly disclose the display device replaces a portion of the images captured by the camera in which a predetermined article inside the mobile object is captured with an image drawn by computer processing and displays the image.
However, Hoshi suggests the display device replaces a portion of the images captured by the camera in which a predetermined article inside the mobile object is captured with an image drawn by computer processing and displays the image (see at least ¶ [0022-0023] disclosing an image processing system that extracts objects found in an image and replaces them).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the image replacement function of Hoshi into the combination of Yasui, Yamamoto, Milevski, and Winn with a reasonable expectation of success because all inventions are directed to capturing, manipulating, communicating, and displaying image information. While Hoshi is directed toward general image processing and not toward vehicles, the image processing functions are directly applicable to the systems of Yasui, Yamamoto, Milevski, and Winn because they similarly process the images captured inside the vehicle. This would allow occupants and users to identify certain features in vehicle images and single them out during remote communication.
Claims 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Yasui et al. in view of Yamamoto, Milevski, and Winn, as applied to claim 1 above, and in view of Matsumoto et al. (US 20180033151 A1).
Regarding claim 15, the combination of Yasui, Yamamoto, Milevski, and Winn does not explicitly disclose the display device displays the image in which a direction that can be visually recognized by the user is restricted according to an agreement at a time of matching the user and the occupant.
However, Matsumoto suggests the display device displays the image in which a direction that can be visually recognized by the user is restricted according to an agreement at a time of matching the user and the occupant (see at least abstract, ¶ [0104-0114] and [0129-0131], and Figs. 9A-9B and 11A-13 disclosing a video masking process for display of recorded people, where a recorded person is masked to ensure privacy during display unless specific user authentication provides authorization to view the masked person).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the image masking function of Matsumoto into the combination of Yasui, Yamamoto, Milevski, and Winn with a reasonable expectation of success because all inventions are directed to capturing, manipulating, communicating, and displaying image information. While Matsumoto is directed toward general video processing and not toward vehicles specifically, the video processing functions are directly applicable to the systems of Yasui, Yamamoto, Milevski, and Winn because they similarly process images captured inside the vehicle. This would allow users to maintain privacy until authorized to display their image to other users.
Regarding claim 16, the combination of Yasui, Yamamoto, Milevski, and Winn does not explicitly disclose the agreement is based on a request from the occupant.
However, Matsumoto suggests the agreement is based on a request from the occupant (see at least abstract, ¶ [0104-0114] and [0129-0131], and Figs. 9A-9B and 11A-13 disclosing a video masking process for display of recorded people, where a recorded person is masked to ensure privacy during display unless specific user authentication provides authorization to view the masked person).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the image masking function of Matsumoto into the combination of Yasui, Yamamoto, Milevski, and Winn with a reasonable expectation of success because all inventions are directed to capturing, manipulating, communicating, and displaying image information. While Matsumoto is directed toward general video processing and not toward vehicles specifically, the video processing functions are directly applicable to the systems of Yasui, Yamamoto, Milevski, and Winn because they similarly process images captured inside the vehicle. This would allow users to maintain privacy until authorized to display their image to other users.
Regarding claim 17, the combination of Yasui, Yamamoto, Milevski, and Winn does not explicitly disclose the first control device masks an angular range that is not visually recognized or performs correction so that the orientation direction is not oriented in a restricted direction.
However, Matsumoto suggests the first control device masks an angular range that is not visually recognized or performs correction so that the orientation direction is not oriented in a restricted direction (see at least abstract, ¶ [0104-0114] and [0129-0131], and Figs. 9A-9B and 11A-13 disclosing a video masking process for display of recorded people, where a recorded person is masked to ensure privacy during display unless specific user authentication provides authorization to view the masked person).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the image masking function of Matsumoto into the combination of Yasui, Yamamoto, Milevski, and Winn with a reasonable expectation of success because all of the references are directed to capturing, manipulating, communicating, and displaying image information. While Matsumoto is directed toward general video processing and is not specifically incorporated into vehicles, its video processing functions are directly applicable to the systems of Yasui, Yamamoto, Milevski, and Winn because those systems similarly process images captured inside the vehicle. This incorporation would allow users to maintain their privacy until display of their image to other users is authorized.
Regarding claim 18, Yasui discloses the second device further has a second control device that causes the display device to selectively display the image corresponding to the orientation direction among the images captured by the camera unit (see at least ¶ [0071-0072] and [0116] disclosing that the second image providing device includes a line-of-sight sensor for detecting the line-of-sight movement of the user such that the in-vehicle camera provides a selected panorama image in a central visual field of the head mount display device).
The combination of Yasui, Yamamoto, Milevski, and Winn does not explicitly disclose the second control device masks an angular range that is not visually recognized in the image or performs correction of the image so that the orientation direction is not oriented in a restricted direction.
However, Matsumoto suggests the second control device masks an angular range that is not visually recognized in the image or performs correction of the image so that the orientation direction is not oriented in a restricted direction (see at least abstract, ¶ [0104-0114] and [0129-0131], and Figs. 9A-9B and 11A-13 disclosing a video masking process for display of recorded people, where a recorded person is masked to ensure privacy during display unless specific user authentication provides authorization to view the masked person).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the image masking function of Matsumoto into the combination of Yasui, Yamamoto, Milevski, and Winn with a reasonable expectation of success because all of the references are directed to capturing, manipulating, communicating, and displaying image information. While Matsumoto is directed toward general video processing and is not specifically incorporated into vehicles, its video processing functions are directly applicable to the systems of Yasui, Yamamoto, Milevski, and Winn because those systems similarly process images captured inside the vehicle. This incorporation would allow users to maintain their privacy until display of their image to other users is authorized.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JARED C BEAN whose telephone number is (571)272-5255. The examiner can normally be reached 7:30AM - 5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Navid Z Mehdizadeh can be reached at (571) 272-7691. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.C.B./Examiner, Art Unit 3669
/KENNETH M DUNNE/Primary Examiner, Art Unit 3669