DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-15 are rejected under 35 U.S.C. 103 as being unpatentable over Eubanks (US 11,956,623 B2) in view of MacDermot (US 12,002,166 B2).
Regarding claim 1, Eubanks discloses a method for rendering a soundscape of a room, the method comprising: obtaining dimensions of the room (fig. 2 (110); col. 10, lines 44-60); obtaining a current position and orientation of a user device in the room (fig. 2 (114); col. 10, lines 32-36; col. 13, lines 40-45); assigning one or more surfaces of the room respective acoustical properties (fig. 2 (106); col. 10, lines 60-67; col. 11, lines 1-10; col. 13, lines 5-15); and calculating the soundscape in the room based on one or more sound sources, the acoustical properties of the one or more surfaces of the room, and the dimensions of the room (col. 5, lines 1-6; col. 13, lines 15-27).
However, although rendering of the soundscape is mentioned, Eubanks does not specify rendering the soundscape of the room by generating a virtual representation of the soundscape with respect to the current position and orientation of the user device and overlaying the virtual representation of the soundscape on a video stream of the room on a screen of the user device, thereby forming a completely or partially rendered representation of the soundscape of the room.
MacDermot, however, discloses rendering the soundscape of the room by generating a virtual representation of the soundscape and overlaying the virtual representation of the soundscape on a video stream of the room on a screen of the user device, thereby forming a completely or partially rendered representation of the soundscape of the room (fig. 4B (600-620); col. 13, lines 55-65; col. 14, lines 35-45). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Eubanks by incorporating this rendering and overlaying of the virtual representation of the soundscape, so as to account for changes in the soundscape dependent on the acoustic properties of the virtual object.
Although neither reference expressly specifies generating the virtual representation of the soundscape with respect to the current position and orientation of the user device, Eubanks discloses various audio adjustments, including a virtual representation of the sound with respect to the current position and orientation of the user device (Eubanks, col. 2, lines 25-35; col. 13, lines 40-45). One of ordinary skill in the art could therefore have rendered the soundscape of the room by generating the virtual representation of the soundscape with respect to the current position and orientation of the user device, achieving the same result, namely accounting for changes in the soundscape dependent on the acoustic properties of the virtual object with respect to a fixed physical structure.
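As a hypothetical illustration of the claimed calculating step only (not a disclosure of Eubanks or MacDermot; the function names, the simplified Sabine/inverse-distance model, and all numeric values are examiner assumptions), the claimed inputs of room dimensions, surface acoustical properties, sound sources, and device position can be combined as follows:

```python
import math

def calculate_soundscape(room_dims, surfaces, sources, listener_pos):
    """Estimate the received level (dB) at the listener for each source.

    room_dims    -- (width, depth, height) of the room in meters
    surfaces     -- {surface_name: absorption_coefficient in [0, 1]}
    sources      -- list of (position, source_level_db) tuples
    listener_pos -- (x, y, z) position of the user device
    """
    w, d, h = room_dims
    volume = w * d * h
    area = 2 * (w * d + w * h + d * h)
    avg_absorption = sum(surfaces.values()) / len(surfaces)
    # Sabine reverberation time: a crude "liveness" factor for the room,
    # tying the result to both the dimensions and the assigned absorption.
    rt60 = 0.161 * volume / (area * avg_absorption)
    levels = []
    for src_pos, level_db in sources:
        distance = max(math.dist(src_pos, listener_pos), 1.0)
        direct_loss = 20 * math.log10(distance)   # free-field attenuation
        reverb_gain = 10 * math.log10(1 + rt60)   # reverberant-field boost
        levels.append(level_db - direct_loss + reverb_gain)
    return levels

room = (5.0, 4.0, 2.5)
surfaces = {"floor": 0.1, "ceiling": 0.6, "walls": 0.3}
sources = [((0.0, 0.0, 1.2), 80.0)]           # one 80 dB source
print(calculate_soundscape(room, surfaces, sources, (3.0, 3.0, 1.5)))
```

The per-source levels produced here correspond to the data one would then visualize in the claimed rendering step, e.g. as an overlay registered to the device's current position and orientation.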
Regarding claim 2, Eubanks discloses the method according to claim 1, but does not disclose further comprising: placing a virtual object having known acoustical properties in the room, wherein calculating the soundscape is further based on a position and orientation of the virtual object in the room and the acoustical properties of the virtual object.
MacDermot, however, discloses placing a virtual object having known acoustical properties in the room, wherein calculating the soundscape is further based on a position and orientation of the virtual object in the room and the acoustical properties of the virtual object (fig. 4B (600-620); col. 13, lines 55-65; col. 14, lines 35-45). It would have been obvious to one of ordinary skill in the art to modify Eubanks by adding this feature so as to account for changes in the soundscape dependent on the acoustic properties of the virtual object.
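As a hypothetical sketch of the claim 2 limitation (illustrative only; the area-weighted averaging and all values are examiner assumptions, not taken from the cited references), a virtual object's known absorption can be folded into the room's effective absorption used by the soundscape calculation:

```python
def effective_absorption(surface_entries, virtual_objects):
    """Area-weighted mean absorption over real surfaces plus virtual objects.

    Each entry is a (surface_area_m2, absorption_coefficient) tuple.
    """
    entries = list(surface_entries) + list(virtual_objects)
    total_area = sum(area for area, _ in entries)
    return sum(area * coeff for area, coeff in entries) / total_area

# Real surfaces: floor, ceiling, walls (area in m^2, absorption coefficient).
room_surfaces = [(20.0, 0.1), (20.0, 0.6), (45.0, 0.3)]
# A virtual sofa: 4 m^2 of highly absorbent material placed in the room.
sofa = [(4.0, 0.9)]

print(effective_absorption(room_surfaces, []))    # without the virtual object
print(effective_absorption(room_surfaces, sofa))  # with the virtual object
```

Because the virtual object raises the effective absorption, the calculated soundscape changes when the object is placed, consistent with the rationale for combining the references stated above.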
Regarding claim 3, Eubanks in view of MacDermot discloses the method according to claim 2, further comprising overlaying a virtual representation of the virtual object at its position and with its orientation on the video stream (col. 13, line 65 - col. 10; since the virtual object will be displayed on the augmented-reality device as it plays the video, such overlaying of the virtual object on the video stream is inherent).
Regarding claim 4, Eubanks discloses the method according to claim 1, but does not specify that obtaining dimensions of the room comprises determining the dimensions of the room by scanning the room with the user device.
MacDermot, however, discloses that obtaining dimensions of the room comprises determining the dimensions of the room by scanning the room with the user device (col. 3, lines 45-67). It would have been obvious to one of ordinary skill in the art to modify Eubanks by adding this feature so as to obtain the proper dimensions of the room.
Regarding claim 5, Eubanks discloses the method according to claim 1, wherein the one or more sound sources are virtual sound sources (Eubanks, col. 2, lines 25-35; col. 13, lines 50-60).
Regarding claim 6, Eubanks discloses the method according to claim 5, further comprising superposing the one or more virtual sound sources on one or more real sound sources (Eubanks, col. 2, lines 35-67).
Regarding claim 7, Eubanks discloses the method according to claim 1, wherein the video stream of the room is a real depiction of the room (Eubanks, col. 2, lines 30-40).
Regarding claim 8, Eubanks discloses the method according to any one of claims 1 to 7, wherein the video stream of the room is a virtual depiction of the room (Eubanks, col. 2, lines 5-15).
Regarding claim 9, Eubanks discloses the method according to any one of claims 1 to 8, wherein assigning (S306) the one or more surfaces respective acoustical properties comprises: scanning, by the user device, the one or more surfaces; determining, from the scan, a material of each of the one or more surfaces (fig. 2 (104/106); col. 10, lines 20-25 and 60-67); and assigning each of the one or more surfaces acoustical properties associated with the respective determined material (col. 11, lines 1-10).
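As a hypothetical illustration of the claim 9 steps (scan, determine material, assign acoustical properties), the mapping from a determined material to its acoustical properties can be sketched as a simple lookup; the material table and absorption coefficients below are examiner assumptions of typical values, not taken from the cited references:

```python
# Assumed absorption coefficients per material (illustrative values only).
ABSORPTION_BY_MATERIAL = {
    "concrete": 0.02,
    "glass": 0.04,
    "wood": 0.10,
    "carpet": 0.45,
    "acoustic_panel": 0.85,
}

def assign_acoustics(scanned_surfaces):
    """Map each scanned surface to the absorption of its detected material.

    scanned_surfaces -- {surface_name: material_name}, e.g. the output of
    a hypothetical material-classification step run on the device's scan.
    """
    return {
        surface: ABSORPTION_BY_MATERIAL[material]
        for surface, material in scanned_surfaces.items()
    }

scan_result = {"floor": "carpet", "north_wall": "concrete", "window": "glass"}
print(assign_acoustics(scan_result))
```

The resulting per-surface coefficients are the acoustical properties that feed into the soundscape calculation of claim 1.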
Claim 10, which recites in substance the same subject matter as claim 1, has been analyzed and is rejected accordingly.
Regarding claim 11, Eubanks discloses a user device for rendering a soundscape of a room (fig. 5 (202); col. 13, lines 28-35), the user device comprising circuitry configured to execute: a first obtaining function configured to obtain dimensions of the room (fig. 2 (110); col. 10, lines 44-60); a second obtaining function (212) configured to obtain a current position and orientation of the user device in the room (fig. 2 (114); col. 10, lines 32-36; col. 13, lines 40-45); an assigning function (214) configured to assign one or more surfaces of the room respective acoustical properties (col. 5, lines 1-6; col. 13, lines 15-27); and a calculating function (216) configured to calculate the soundscape in the room based on one or more sound sources, the acoustical properties of the one or more surfaces of the room, and the dimensions of the room (col. 5, lines 1-6; col. 13, lines 15-27).
However, although rendering of the soundscape is mentioned, Eubanks fails to disclose a rendering function (218) configured to render the soundscape of the room by generating a virtual representation of the soundscape with respect to the current position and orientation of the user device and overlaying the virtual representation of the soundscape on a video stream of the room on a screen of the user device, thereby forming a completely or partially rendered representation of the soundscape of the room.
MacDermot, however, discloses a rendering function (218) configured to render the soundscape of the room by generating a virtual representation of the soundscape and overlaying the virtual representation of the soundscape on a video stream of the room on a screen of the user device, thereby forming a completely or partially rendered representation of the soundscape of the room (fig. 4B (600-620); col. 13, lines 55-65; col. 14, lines 35-45). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Eubanks by incorporating this rendering and overlaying of the virtual representation of the soundscape, so as to account for changes in the soundscape dependent on the acoustic properties of the virtual object.
Although neither reference expressly specifies generating the virtual representation of the soundscape with respect to the current position and orientation of the user device, Eubanks discloses various audio adjustments, including a virtual representation of the sound with respect to the current position and orientation of the user device (Eubanks, col. 2, lines 25-35; col. 13, lines 40-45). One of ordinary skill in the art could therefore have rendered the soundscape of the room by generating the virtual representation of the soundscape with respect to the current position and orientation of the user device, achieving the same result, namely accounting for changes in the soundscape dependent on the acoustic properties of the virtual object with respect to a fixed physical structure.
Claims 12-13, which recite in substance the same features as claims 2-3, have been analyzed and are rejected accordingly.
Claim 14, which recites in substance the same features as claim 4, has been analyzed and is rejected accordingly.
Regarding claim 15, Eubanks discloses the user device according to claim 11, wherein the assigning function is further configured to: scan, by the user device (200), the one or more surfaces; determine, from the scan, a material of each of the one or more surfaces (fig. 2 (104/106); col. 10, lines 20-25 and 60-67); and assign each of the one or more surfaces acoustical properties associated with the respective determined material (col. 11, lines 1-10).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DISLER PAUL whose telephone number is (571)270-1187. The examiner can normally be reached 9:00-6:00 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vivian Chin, can be reached at (571) 272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DISLER PAUL/Primary Examiner, Art Unit 2695