DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
1. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
2. Claims 1, 7-15 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Desai et al. (US 20170195562) in view of Cook et al. (US 20180350363).
Regarding claim 1, Desai discloses a method of operating an independent recording device (Fig. A, 4), the method comprising: capturing, with a panoramic camera system in the recording device, a video recording (Paragraphs 0004 and 0023: Desai discusses how panoramic video content may be captured, and how the system records or streams video with the panoramic camera); capturing, with a plurality of microphones in the recording device, an audio recording (Paragraphs 0039 and 0052: Desai discusses microphones with enhanced audio recording capabilities); and performing at least one of storing the video recording and the audio recording locally on the recording device, or sending the video recording and the audio recording to an external storage device (Paragraphs 0016, 0039, and 0045: Desai discusses how videos are stored on the device and how a user can view locally stored video content; how video content can be recorded and stored for processing at a later time or can be captured as a live video stream during use of the camera; and how audio data are recorded from a plurality of microphones inside the camera body, i.e., storing the video recording and the audio recording locally on the recording device).
Desai discloses the invention set forth above but does not specifically state “a video recording of a complete circumference.”
Desai, however, discloses how panoramic video content may be captured, recorded, or streamed with the panoramic camera, and how a circular image is produced on the sensor from the panoramic lens of the panoramic camera (Desai: Paragraphs 0004, 0023 and 0030). Thus, it would have been obvious to one of ordinary skill in the art that the circular image produced on the sensor from the panoramic lens, together with capturing, recording, or streaming video with the panoramic camera, would allow the system to record a complete-circumference video, making panoramic video universally compatible with display technology while maintaining high video image quality, as disclosed by Desai.
Desai discloses the invention set forth above but does not specifically point out “receiving data from one or more additional recording devices that is networked together with the recording device to determine a participant location via echo location.”
Cook, however, discloses receiving data from one or more additional recording devices that is networked together with the recording device to determine a participant location via echo location (Paragraphs 0127, 0134 and 0159-0160: Cook discusses how the system determines the geometry of rooms by echo location, and how devices transmit data from their microphones to the network when they detect an audio signal emitted by one of the other devices in the network, thereby identifying the physical location of devices in the room. Cook also discusses how echolocation may be used to detect the presence of a user in a room).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Desai to receive data from one or more additional recording devices that is networked together with the recording device to determine a participant location via echo location, as taught by Cook, thereby allowing the system to accurately calculate the position of the listener's ears in space and the source of the noise, as discussed by Cook.
Considering claim 7, Desai discloses the method of claim 1, further comprising: selectively initiating capturing the video recording and the audio recording (Paragraphs: 0027, 0053 and 0057).
Considering claim 8, Desai discloses the method of claim 7, wherein selectively initiating capturing the video recording and the audio recording occurs upon receipt of a signal from a sensor within the recording device (Paragraphs: 0027 and 0053: a security system configured to monitor a facility or area for criminal activity).
Considering claim 9, Desai discloses the method of claim 1, wherein storing the video recording and the audio recording locally on the recording device occurs in real time (Paragraphs: 0039 and 0052).
Considering claim 10, Desai discloses the method of claim 1, wherein sending the video recording and the audio recording occurs in real time (Paragraphs 0016 and 0026: Desai discusses how video content can be recorded or stored for processing at a later time or can be captured as a live video stream during use of the camera).
Considering claim 11, Desai discloses the method of claim 1, further comprising: performing both the storing step and the sending step (Paragraphs 0021, 0032 and Fig. 1: Desai discusses how the processor receives, retrieves, and/or sends data, including data captured by the IR receiver and/or the camera, to data storage).
Considering claim 12, Desai discloses the method of claim 11, wherein the sending step occurs after the storing step (Paragraphs: 0021, 0032 and fig.1).
Considering claim 13, Desai discloses the method of claim 11, wherein the sending step comprises transmitting the video recording and the audio recording via a SIM card inserted within a SIM card slot of the recording device (Paragraph: 0032: Desai discusses SD memory card or the like).
Considering claim 14, Desai discloses the method of claim 1, displaying a panoramic video in a side-by-side viewable display (Abstract, lines 1-5: displaying panoramic content such as video content and image data with a panoramic camera system).
Considering claim 15, Desai discloses the method of claim 1, further comprising: transmitting the recorded audio from speakers on the recording device (Paragraphs: 0032 and 0066: transmitting and receiving data and video from the camera).
Considering claim 21, Cook further discloses the method of claim 1, further comprising: simultaneously processing, via an onboard processor, the video and the audio (Paragraphs 0162-0163: Cook discusses how video cameras in the devices are used to determine what was said by the user via lip-reading software in combination with audio data from the microphone).
3. Claims 2-6, 18-20 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Desai et al. (US 20170195562) in view of Coven et al. (US 20190108044) and further in view of Cook et al. (US 20180350363).
Regarding claim 18, Desai discloses a recording device comprising: a panoramic camera system; a plurality of microphones (Paragraph 0039: plurality of microphones); an antenna (Paragraph 0022 and Fig. A, 52: a wireless communication link such as a WiFi or Bluetooth communication link or other radio-frequency-based link, i.e., via an antenna); and a processor communicatively coupled to the camera and the plurality of microphones, configured for: recording simultaneous video from the panoramic camera system (Fig. A, 4: Desai discusses a panoramic camera with processor, storage and microphone); and recording audio from the plurality of microphones (Paragraphs 0039 and 0052: Desai discusses microphones with enhanced audio recording capabilities).
Desai discloses the invention set forth above but fails to disclose “instantaneous transcription and translation of recorded audio; automatically upload video, audio, and transcription to a cloud based medium.”
Coven, however, discloses instantaneous transcription and translation of recorded audio, and automatically uploading video, audio, and transcription to a cloud-based medium (Paragraphs 0068 and 0079: Coven discusses how an audio-to-text Skill may receive an uploaded audio file as an input and then output information such as identified keywords, transcriptions, lyrics, etc. Similarly, a video labeling Skill may receive an uploaded video file as an input, and its output may then be automatically integrated into the cloud-based collaboration platform as metadata associated with the content).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Desai to provide instantaneous transcription and translation of recorded audio and to automatically upload video, audio, and transcription to a cloud-based medium, as taught by Coven, thereby facilitating content management and, more particularly, the integration of data processing technologies with a cloud-based collaboration platform, as discussed by Coven.
Desai in view of Coven discloses the invention set forth above but does not specifically point out “receiving data from one or more additional recording devices that is networked together with the recording device to determine a participant location via echo location.”
Cook, however, discloses receiving data from one or more additional recording devices that is networked together with the recording device to determine a participant location via echo location (Paragraphs 0127, 0134 and 0159-0160: Cook discusses how the system determines the geometry of rooms by echo location, and how devices transmit data from their microphones to the network when they detect an audio signal emitted by one of the other devices in the network, thereby identifying the physical location of devices in the room. Cook also discusses how echolocation may be used to detect the presence of a user in a room).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Desai and Coven to receive data from one or more additional recording devices that is networked together with the recording device to determine a participant location via echo location, as taught by Cook, thereby allowing the system to accurately calculate the position of the listener's ears in space and the source of the noise, as discussed by Cook.
Considering claim 2, Coven discloses the method of claim 1, further comprising: transcribing audio from the audio recording to text data (Paragraphs 0068 and 0079: Coven discusses how an audio-to-text Skill may receive an uploaded audio file as an input and then output information such as identified keywords, transcriptions, lyrics, etc.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Desai to transcribe audio from the audio recording to text data, as taught by Coven, thereby facilitating content management and, more particularly, the integration of data processing technologies with a cloud-based collaboration platform, as discussed by Coven.
Considering claim 3, Coven discloses the method of claim 2, wherein transcribing audio from the audio recording to text data is performed locally on the recording device by one or more processors physically affixed thereto (Paragraphs 0068, 0079 and 0190: Coven discusses how an audio-to-text Skill may receive an uploaded audio file as an input and then output information such as identified keywords and transcriptions, and how the system may operate as a standalone device).
Considering claim 4, Coven discloses the method of claim 3, further comprising: overlaying the video recording with the text data; and syncing the video recording, the text data, and the audio recording (Paragraphs: 0068 and 0079).
Considering claim 5, Coven discloses the method of claim 2, wherein the external storage device is included in an external computing system, and wherein transcribing audio from the audio recording to text data is performed by the external computing system (Paragraphs: 0079 and 0088: Coven discusses how the Google Cloud Speech API is configured to receive requests including uploaded audio files and to convert the uploaded audio files into textual information).
Considering claim 6, Coven discloses the method of claim 1, wherein capturing the video recording and capturing the audio recording occurs simultaneously (Paragraphs: 0061 and 0096: Coven discusses how the content item is a video clip, i.e. upon recording audio and video simultaneously).
Considering claim 19, Coven discloses the recording device of claim 18, further comprising: a plurality of speakers, the processor further configured to transmit previously recorded audio from the plurality of speakers (Paragraphs 0079 and 0186: Coven discusses how call centers can be invoked to process a recorded call to determine the sentiment of the calls, and how the system processes an audio file of the cloud-based collaboration platform by transmitting the audio file).
Considering claim 20, Coven discloses the recording device of claim 18, further comprising: a programmable memory capable of storing video, audio, and transcription until able to upload to a cloud-based medium (Paragraphs 0068 and 0079: Coven discusses how an audio-to-text Skill may receive an uploaded audio file as an input and then output information such as identified keywords, transcriptions, lyrics, etc. Similarly, a video labeling Skill may receive an uploaded video file as an input, and its output may then be automatically integrated into the cloud-based collaboration platform as metadata associated with the content).
Considering claim 22, Cook further discloses the recording device of claim 18, wherein the recording device is a wearable device (Paragraphs: 0143-0144: wearable device like a smart watch).
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-15 and 18-22 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant argued that the prior art of record fails to disclose a camera system utilizing echo location, or determining a participant location. The Examiner respectfully disagrees. The newly introduced reference, Cook et al., discloses how the system determines the geometry of rooms by echo location, and how devices transmit data from their microphones to the network when they detect an audio signal emitted by one of the other devices in the network, thereby identifying the physical location of devices in the room. Cook also discusses how echolocation may be used to detect the presence of a user in a room (Cook: Paragraphs 0127, 0134 and 0159-0160). Therefore, the prior art of record discloses the argued claim limitations.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YOSEF K LAEKEMARIAM whose telephone number is (571)270-5149. The examiner can normally be reached 9:30-6:30 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Duc Nguyen can be reached at (571) 272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YOSEF K LAEKEMARIAM/ Examiner, Art Unit 2691 12/22/2025