DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
1. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
2. Claims 1-20 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-6, 9-17 and 19-20 of U.S. Patent Application No. 17/734,952. Although the conflicting claims are not identical, they are not patentably distinct from each other because both claim similar methods and comprise almost identical steps (see the claim comparison below).
Application No. 18/780,182
Claim 1, a computing system comprising: a first camera; a second camera; a display; one or more processors; and one or more computer-readable hardware storage devices having stored thereon computer-executable instructions that are structured such that, when executed by the one or more processors, configure the computing system to perform independently at least: receive a user input initiating a two-way camera operation using both the first camera and the second camera; activate the first camera to capture a first video stream generated by the first camera; activate the second camera to capture a second video stream generated by the second camera; and display both the first video stream and the second video stream simultaneously on the display of the computing system, wherein because activations of the first camera and the second camera are independent, functions of each of the first camera and second camera can be started, stopped, paused, unpaused, or modified individually.
US patent application No. 17/734,952
Claim 1, a computing system comprising: a first camera; a second camera; a microphone; a display; one or more processors; and one or more computer-readable hardware storage devices having stored thereon computer-executable instructions that are structured such that, when executed by the one or more processors, configure the computing system to perform independently at least: receive a user input initiating a two-way camera operation using both the first camera and the second camera; activate the first camera and generate a first visualization, displaying a first image generated by the first camera; activate the second camera and generate a second visualization, displaying a second image generated by the second camera; and display both the first visualization and the second visualization simultaneously on the display of the computing system; calibrate a color balance of the first camera or the second camera based on a reference color chart having a set of standard colors printed thereon, wherein calibrating the color balance of the first camera or the second camera includes: taking a picture or a video of the reference color chart by the first camera or the second camera, comparing a set of colors in the picture with a correct set of colors corresponding to the set of standard colors printed on the reference color chart to determine whether the set of colors in the picture matches the correct set of colors, and in response to determining that the set of colors does not match the correct set of colors, adjusting the color balance of the first camera or the second camera, causing the first camera or the second camera to take pictures or videos with the correct set of colors; and wherein because activations of the first camera and the second camera are independent, functions of each of the first camera and second camera can be started, stopped, paused, unpaused, or modified individually.
Application No. 18/780,182
Claim 12, a method implemented at a computing system for independently performing a two-way camera operation, the computing system comprising a first camera, a second camera, a display, and a computer-readable hardware storage device, the method comprising: receiving a user input initiating a two-way camera operation using both the first camera and the second camera; activating the first camera to capture a first video stream generated by the first camera; activating the second camera to capture a second video stream generated by the second camera; and displaying both the first video stream and the second video stream simultaneously on the display of the computing system, wherein because activations of the first camera and the second camera are independent, functions of each of the first camera and second camera can be started, stopped, paused, unpaused, or modified individually.
US patent application No. 17/734,952
Claim 12, a method implemented at a computing system for independently performing a two-way camera operation, the computing system comprising a first camera, a second camera, a microphone, a display, and a computer-readable hardware storage device, the method comprising: receiving a user input initiating a two-way camera operation using both the first camera and the second camera; activating the first camera and generate a first visualization, displaying a first image generated by the first camera; activating the second camera and generate a second visualization, displaying a second image generated by the second camera; displaying both the first visualization and the second visualization simultaneously on the display of the computing system, taking a sequence of pictures or a video of a monitor by the first camera or the second camera; identifying a refresh rate of the monitor; adjusting a frame rate of the first camera or the second camera based on the refresh rate of the monitor; and taking a picture or a video of the monitor based on the adjusted frame rate, wherein because activations of the first camera and the second camera are independent, functions of each of the first camera and second camera can be started, stopped, paused, unpaused, or modified individually.
Application No. 18/780,182
20. A computer program product comprising one or more hardware storage devices having stored thereon computer-executable instructions that are structured such that, when the computer-executable instructions are executed by one or more processors of a computing system, the computing system comprising a first camera, a second camera, a microphone, and a display, the computer-executable instructions cause the computing system to perform independently at least: receiving a user input initiating a two-way camera operation using both the first camera and the second camera; activating the first camera to capture a first video stream generated by the first camera; activating the second camera to capture a second video stream generated by the second camera; and displaying both the first video stream and the second video stream simultaneously on the display of the computing system, wherein because activations of the first camera and the second camera are independent, functions of each of the first camera and second camera can be started, stopped, paused, unpaused, or modified individually.
US patent application No. 17/734,952
20. A computer program product comprising one or more hardware storage devices having stored thereon computer-executable instructions that are structured such that, when the computer-executable instructions are executed by one or more processors of a computing system, the computing system comprising a first camera, a second camera, a microphone, and a display, the computer-executable instructions cause the computing system to perform independently at least: receive a user input initiating a two-way camera operation using both the first camera and the second camera; activate the first camera and generate a first visualization, displaying a first image generated by the first camera; activate the second camera and generate a second visualization, displaying a second image generated by the second camera; display both the first visualization and the second visualization simultaneously on the display of the computing system; receiving a second user input, indicating starting a video recording; in response to the second user input, simultaneously performing the following: recording a first video generated by the first camera, recording a second video generated by the second camera, and recording an audio generated by the microphone; and storing the first video, the second video, and the audio relationally in the one or more hardware storage device, take a sequence of pictures or a video of a monitor by the first camera or the second camera; identify a refresh rate of the monitor; adjust a frame rate of the first camera or the second camera based on the refresh rate of the monitor; and take a picture or a video of the monitor based on the adjusted frame rate, wherein because activations of the first camera and the second camera are independent, functions of each of the first camera and second camera can be started, stopped, paused, unpaused, or modified individually.
The subject matter claimed in the instant application is fully disclosed in U.S. Patent Application No. 17/734,952 and is covered by that application, since the reference application and the instant application claim common subject matter, as follows:
The claimed invention in the instant application is fully disclosed in the reference application and is broader than the claimed invention of application No. 17/734,952. No new invention or new improvement is being claimed in the instant application. Applicant is now attempting to claim broadly that which had been previously described in more detail in the claims of the reference application (In re Van Ornum, 214 USPQ 761 (CCPA 1982)).
Furthermore, there is no apparent reason why Applicant was prevented from presenting claims corresponding to those of the instant application during prosecution of the reference application.
Allowable Subject Matter
1. Claims 8-11 and 16-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 103
2. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
3. Claims 1-5, 12-14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Cobb et al. (US 20140307101).
Regarding claims 1 and 12, Cobb discloses a computing system comprising: a first camera (fig.1A, 102); a second camera (fig.1B, 104); a display; one or more processors (Paragraph: 0029: Cobb discusses a computing device with a microphone, display, and processor); and one or more computer-readable hardware storage devices having stored thereon computer-executable instructions that are structured such that, when executed by the one or more processors (fig.7), configure the computing system to perform independently at least: receive a user input initiating a two-way camera operation using both the first camera and the second camera (Paragraphs: 0018-0019, 0029 and fig.1A-B: Cobb discusses a mobile device with a microphone, speaker, I/O interface, front and rear cameras, and a display coupled to a processor through a communication channel, i.e., allowing a user input to be received to initiate a two-way camera operation using both the first camera and the second camera); activate the first camera to capture a first video stream generated by the first camera (Paragraphs: 0018-0019 and fig.1A, 102: Cobb discusses a mobile phone with a front-facing camera capturing an image and displaying it on the screen, i.e., upon activating the first camera to capture a first video stream); activate the second camera to capture a second video stream generated by the second camera (Paragraphs: 0018-0021 and fig.1B, 104: a mobile phone with a rear-facing camera (i.e., a second camera) capturing an image and displaying it on the screen); and display both the first video stream and the second video stream simultaneously on the display of the computing system (Paragraphs: 0005-0006, 0022-0023 and fig.3A-B: Cobb discusses how images captured by the image capturing devices are shown on a display of the mobile device, wherein the recording and the displaying are performed simultaneously, and how the display of the mobile phone simultaneously displays an image of the field of view of the front- and rear-facing cameras).
Cobb discloses the invention set forth above but does not specifically point out “wherein because activations of the first camera and the second camera are independent, functions of each of the first camera and second camera can be started, stopped, paused, unpaused, or modified individually.”
Cobb, however, discloses how a continuous video stream recorded by the device switches back and forth between the fields of view of a front-facing camera and a rear-facing camera (i.e., the first and second cameras), and how the mobile device ceases/stops recording a first field of view upon receiving a command to switch to (and record) a second field of view (Cobb: Paragraphs: 0019 and 0024-0025). Thus, it would have been obvious to one of ordinary skill in the art to interpret the ceasing or stopping of the video recording by the first (front) camera or the second (rear) camera upon receiving a command from a user as showing that activations of the first camera and the second camera are independent, such that functions of each of the first camera and second camera can be started, stopped, paused, unpaused, or modified individually, thereby allowing images to be captured by the integrated image capturing devices of the mobile device in an effective manner, as disclosed by Cobb.
Regarding claim 20, Cobb discloses a computer program product comprising one or more hardware storage devices having stored thereon computer-executable instructions that are structured such that, when the computer-executable instructions are executed by one or more processors of a computing system (fig.7), the computing system comprising a first camera (fig.1A, 102), a second camera (fig.1B, 104), and a display (Paragraph: 0029: Cobb discusses a computing device with a microphone, display, and processor), the computer-executable instructions cause the computing system to perform independently (Paragraph: 0024) at least: receiving a user input initiating a two-way camera operation using both the first camera and the second camera (Paragraphs: 0018-0019, 0029 and fig.1A-B: Cobb discusses a mobile device with a microphone, speaker, I/O interface, front and rear cameras, and a display coupled to a processor through a communication channel, i.e., allowing a user input to be received to initiate a two-way camera operation using both the first camera and the second camera); activating the first camera to capture a first video stream generated by the first camera (Paragraphs: 0018-0019 and fig.1A, 102: Cobb discusses a mobile phone with a front-facing camera capturing an image and displaying it on the screen, i.e., upon activating the first camera to capture a first video stream); activating the second camera to capture a second video stream generated by the second camera (Paragraphs: 0018-0021 and fig.1B, 104: a mobile phone with a rear-facing camera (i.e., a second camera) capturing an image and displaying it on the screen); displaying both the first video stream and the second video stream simultaneously on the display of the computing system (Paragraphs: 0005-0006, 0022-0023 and fig.3A-B: Cobb discusses how images captured by the image capturing devices are shown on a display of the mobile device, wherein the recording and the displaying are performed simultaneously, and how the display of the mobile phone simultaneously displays an image of the field of view of the front- and rear-facing cameras); receiving a second user input, indicating starting a video recording (Paragraphs: 0006-0007 and 0025: Cobb discusses how a device records a video stream of images in response to user commands, i.e., it would have been obvious to do so in response to a first or second user input command); in response to the second user input, simultaneously performing the following: recording a first video generated by the first camera, recording a second video generated by the second camera, and recording an audio generated by the microphone (Paragraphs: 0022, 0026 and fig.3A-3B: Cobb discusses how a system allows separate adjustment of the audio record levels of different image capturing devices or cameras, and how the display of the mobile device simultaneously displays the images captured by at least two of the image capturing devices or cameras); and storing the first video, the second video, and the audio relationally in the one or more hardware storage devices (Paragraphs: 0029-0031 and fig.7, 704: Cobb discusses a storage medium operated by the processor in the memory of the general-purpose computing device).
Cobb discloses the invention set forth above but does not specifically point out “wherein because activations of the first camera and the second camera are independent, functions of each of the first camera and second camera can be started, stopped, paused, unpaused, or modified individually.”
Cobb, however, discloses how a continuous video stream recorded by the device switches back and forth between the fields of view of a front-facing camera and a rear-facing camera (i.e., the first and second cameras), and how the mobile device ceases/stops recording a first field of view upon receiving a command to switch to (and record) a second field of view (Cobb: Paragraphs: 0019 and 0024-0025). Thus, it would have been obvious to one of ordinary skill in the art to interpret the ceasing or stopping of the video recording by the first (front) camera or the second (rear) camera upon receiving a command from a user as showing that activations of the first camera and the second camera are independent, such that functions of each of the first camera and second camera can be started, stopped, paused, unpaused, or modified individually, as disclosed by Cobb.
Considering claims 2 and 14, Cobb discloses the computing system of claim 1 and the method of claim 12, further comprising: receiving a second user input, indicating starting a video recording; in response to the second user input, simultaneously performing the following: recording a first video generated by the first camera, recording a second video generated by the second camera, and recording an audio generated by the microphone (Paragraphs: 0006, 0016 and 0022: Cobb discusses how a system allows a user of a mobile device having multiple integrated image capturing devices to view and/or record the fields of view of multiple image capturing devices simultaneously); and storing the first video, the second video, and the audio relationally in the computer-readable hardware storage device (Paragraphs: 0022-0023 and fig.7).
Considering claim 3, Cobb discloses the computing system of claim 2, the computing system further configured to: activate the microphone when the first camera or the second camera is activated; start an audio recording when the first video and the second video are simultaneously recorded; and store the recorded audio with the first video and the second video relationally in the one or more computer-readable hardware storage devices (Paragraphs: 0006, 0018-0019 and 0022-0023: the recording and the displaying are performed simultaneously for the first camera and the second camera).
Considering claim 4, Cobb discloses the computing system of claim 1, wherein the computing system is configured to have a plurality of data channels, each configured to receive a data stream; any one of the plurality of data channels is configured to receive a video data stream from the first camera or the second camera (Paragraphs: 0018-0020, 0029 and fig.7: Cobb discusses a physical device or subsystem that is coupled to a processor through a communication channel, i.e., it would have been obvious for the computing system to have a plurality of data channels).
Considering claim 5, Cobb discloses the computing system of claim 4, wherein the plurality of data channels are also configured to receive a data stream generated by an external device or stored in a storage, and display each data stream in one of a plurality of visualizations (Paragraphs: 0019, 0029-0031 and fig.7, 704: Cobb discusses a storage medium operated by the processor in the memory of the general purpose computing device).
Considering claim 13, Cobb discloses the method of claim 12, wherein the first visualization and the second visualization are displayed in a split-view manner, in which the first visualization and the second visualization are displayed side by side and cover a whole area of the display substantially (fig.6).
4. Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Cobb et al. (US 20140307101) in view of Celmins et al. (US 20200265961).
Considering claims 6 and 15, Cobb fails to disclose the limitations of claims 6 and 15. Celmins, however, discloses the computing system of claims 6 and 15, wherein the computing system is further configured to: record a healthcare interaction as a first video dataset, a second video dataset, and an audio dataset, wherein the audio dataset contains a conversation between a healthcare provider and a patient during the healthcare interaction, the first video dataset contains actions of the healthcare provider during the healthcare interaction, the second video dataset contains actions of the patient, the first video is generated by the first camera, and the second video is generated by the second camera; and store the first video dataset, the second video dataset, and the audio dataset as an artifact associated with the customer (Paragraphs: 0007 and 0033: Celmins discusses how the telemedicine system includes a remote device coupled to the controller via the network, and how the remote device includes a camera, a display, a microphone, and a speaker. Celmins also discusses how the remote device is configured to establish a communication session with the controller, and how, when a patient arrives for a telehealth consultation with a remote physician, a nurse or medical assistant may open a medical record for the patient using the touchscreen interface and begin documenting the encounter, including entering the patient's current vital signs into the record before the remote physician is notified that the patient is ready to begin the consultation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Cobb such that the computing system is further configured to: record a healthcare interaction as a first video dataset, a second video dataset, and an audio dataset, wherein the audio dataset contains a conversation between a healthcare provider and a patient during the healthcare interaction, the first video dataset contains actions of the healthcare provider during the healthcare interaction, the second video dataset contains actions of the patient, the first video is generated by the first camera, and the second video is generated by the second camera; and store the first video dataset, the second video dataset, and the audio dataset as an artifact associated with the customer, as taught by Celmins, thus providing real-time audio/video consultation between remote parties, as discussed by Celmins.
5. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Cobb et al. (US 20140307101) in view of Tindall et al. (US 20030146982).
Considering claim 7, Cobb discloses the invention set forth above but fails to disclose the limitations of claim 7. Tindall, however, discloses the computing system of claim 7, wherein the computing system is further configured to calibrate a color balance of the first camera or the second camera based on a reference color chart having a set of standard colors printed thereon (Paragraphs: 0008-0010 and fig.2: Tindall discusses how a system calibrates the color balance of a video camera, and how the system utilizes a color balance reference to enable a given scene to be recorded with the video camera).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Cobb such that the computing system is further configured to calibrate a color balance of the first camera or the second camera based on a reference color chart having a set of standard colors printed thereon, as taught by Tindall, thereby allowing the color balance of video cameras to be calibrated, as discussed by Tindall.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YOSEF K LAEKEMARIAM whose telephone number is (571)270-5149. The examiner can normally be reached 9:30-6:30 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Duc Nguyen can be reached on (571) 272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YOSEF K LAEKEMARIAM/ Examiner, Art Unit 2691 01/20/2026