Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office action is responsive to application No. 17/918,992 filed on 01/08/2026. Claims 1-5, 7, 14, and 21 have been cancelled. Claims 6, 8-13, 15-20, and 22-32 are pending and have been examined.
Response to Arguments
Applicant’s arguments with respect to claims 6, 8-13, 15-20, and 22-32 have been considered but are moot in view of the new ground(s) of rejection.
Although new grounds of rejection have been made, some of Applicant’s arguments are addressed below.
Applicant asserts on p. 13 that “Takahashi is also silent about "combining the cut-out image with the game image to generate a distribution image based on the first timestamp and the second timestamp" because Takahashi does not consider synchronizing images. Neither Grubbs nor Takahashi, alone or in combination, disclose, teach or suggest "combining the cut-out image with the game image to generate a distribution image based on the first timestamp and the second timestamp" recited in claim 6. Accordingly, claim 6 is not obvious over the cited art, even if combining Grubbs with Takahashi.”
In response, the Examiner respectfully disagrees. Grubbs teaches, in paragraphs including but not limited to the following: Paragraph 0034 teaches that the compositing engine is constructed to synchronize the first video stream and the second video stream by using time stamps included in the metadata of the first video stream and the second video stream. Paragraph 0062 teaches that a number of live stream data assets 904 are depicted in an overlay setting where a first live video stream 922, a second live video stream 924, and a third live video stream 926 are all overlaid on a live program application stream. Paragraph 0172 teaches a platform ingest server (e.g., 1130 of FIG. 11A) receiving a first video stream (e.g., video game screen capture, webcam video, and the like). Paragraph 0206 teaches providing a video stream (to the platform ingest server 1130) that includes screen capture of game display output for a first user of a video game. Figs. 6 and 7 and Paragraph 0057 clearly teach a webcam overlaid on live display screen captures. Fig. 10 and Paragraph 0062 teach cameras 922, 924, and 926 overlaid on a live program application stream that resembles video game content.
From the cited paragraphs and the Office Action below, we see that the screen captures may also be video game screen captures. As evidenced in Figs. 6, 7, and 10, the webcam video(s) are overlaid on screen captures. As screen captures may also be video game captures, Grubbs would teach combining the capture image with the game image to generate a distribution image based on the first timestamp and the second timestamp, where the streams may be synchronized using time stamps included in the streams. What Grubbs does not explicitly teach is that the webcam image is extracted as a cut-out image from the camera image, i.e., combining the cut-out image with the image, which Takahashi teaches in Paragraphs 0126-0131. The combination of Grubbs and Takahashi would result in a webcam image that is a cut-out image extracted from the camera image and combined with the video game screen captures.
Therefore, based on the above and the Office Action below, the combination of Grubbs, Xie, and Takahashi teaches the claimed limitation(s).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 6, 8, 10-13, 15, 17-20, 22, 24-26, 28, 30, and 32 is/are rejected under 35 U.S.C. 103 as being unpatentable over Grubbs et al. (US 2018/0035145), in view of Xie et al. (US 2021/0099733), and further in view of Takahashi et al. (US 2018/0091738).
Consider claims 6, 13, and 20, Grubbs teaches a server, method for operating a server, and a non-transitory computer-readable medium storing computer-readable instructions that, when executed by processing circuitry, cause a server to perform operations comprising: circuitry configured to perform operations (Figs.2, 3, 11A-B; Paragraph 0037, 0162, 0164; Fig.16, Paragraph 0208-0216) comprising:
acquiring a first stream, including a game image from an information processing apparatus connected to the server; a first timestamp corresponding to the game image to the first stream (Paragraph 0034 teaches time stamps included in metadata of the first video stream and the second video stream. Paragraph 0071 teaches video production platform system 1170 is communicatively coupled to at least one user device, e.g., 1110, 1120. Paragraph 0105 teaches at least one user device, e.g., 1110, 1120, includes at least one video device that is constructed to generate video data. At least one capture module, e.g., 1111, 1121, is communicatively coupled to at least one video device that is constructed to generate video data. Video data generated by at least one video device may correspond to a video game. Video data generated by at least one video device may correspond to screen capture data. Paragraph 0112 teaches ingest server 1130 receives a video stream from capture module of a first user device, e.g., the capture module 1111 of user device 1110. Paragraph 0143 teaches capture module 1111 is constructed to capture video data generated by a video device of the user device 1110, generate a first video stream, and provide the video stream to the platform ingest server 1130. Paragraph 0152 teaches the platform ingest server 1130 is constructed to receive a video stream from the capture modules 1111 and 1121. Paragraph 0172 teaches a platform ingest server, e.g., 1130 of FIG. 11A, receiving a first video stream, e.g., video game screen capture, webcam video, and the like. Paragraph 0206 teaches provides a video stream, to the platform ingest server 1130, that includes screen capture of game display output for a first user of a video game);
acquiring a second stream, including a camera image, from the information processing apparatus; a second timestamp corresponding to the camera image to the second stream (Paragraph 0034 teaches time stamps included in metadata of the first video stream and the second video stream. Paragraph 0071 teaches video production platform system 1170 is communicatively coupled to at least one user device, e.g., 1110, 1120. Paragraph 0105 teaches at least one user device, e.g., 1110, 1120, includes at least one video device that is constructed to generate video data. At least one capture module, e.g., 1111, 1121, is communicatively coupled to at least one video device that is constructed to generate video data. Video data generated by at least one video device is web cam data. Paragraph 0112 teaches ingest server 1130 receives a video stream from capture module of a first user device, e.g., the capture module 1111 of user device 1110. Paragraph 0143 teaches capture module 1111 is constructed to capture video data generated by a video device of the user device 1110, generate a first video stream, and provide the video stream to the platform ingest server 1130. Paragraph 0152 teaches the platform ingest server 1130 is constructed to receive a video stream from the capture modules 1111 and 1121. Paragraph 0172 teaches a platform ingest server, e.g., 1130 of FIG. 11A, receiving a first video stream, e.g., video game screen capture, webcam video, and the like);
processing a capture image from the camera image; combining the capture image with the game image to generate a distribution image based on the first timestamp and the second timestamp (Paragraph 0034 teaches compositing engine is constructed to synchronize the first video stream and the second video stream by using time stamps included in the metadata of the first video stream and the second video stream. Paragraph 0041 teaches processing system 110 may be a composite live stream digital video output from the user 102. Paragraph 0042 teaches processing system 110 may be responsible for the processing and formatting of the multiple live digital content streams. Paragraph 0054 teaches the components of engine 408 process incoming data from digital content and digital content streams, and once the processing is complete, a composite digital content stream output is sent by the engine 408. Fig.7, Paragraph 0057 teaches live digital content streams 602 can be streams or static data from multiple and varied input devices, such as a webcam, generated static text assets, images, live display screen captures, web assets, and multiple other digital content inputs. Fig.8, Paragraph 0058 teaches in a main production screen output area 702, a scene is created and displayed from any number of live digital content streams and digital content assets which may be represented with icons on a toolbar within an asset area 704, and which may be switched on/off. Assets and streams switched off are not displayed in the main production screen output area 702, whereas assets and streams switched on are displayed in the main production screen output area 702. Paragraph 0059 teaches a control 714 to add additional scenes, which can be created and manipulated from the number of digital content streams and digital content assets represented in asset area 704. 
Paragraph 0060 teaches main production screen output area 702 showing a scene having a live digital content stream including a first live camera video stream 716, a second live camera video stream 718, a first live program application stream 720, shown as an online gaming display, and a second live program application stream 722, also shown as an online gaming display. Paragraph 0057, 0060 teaches placement of the streams may be infinitely interchangeable and positionable. Fig.10, Paragraph 0062 teaches a number of live stream data assets 904 are depicted in an overlay setting where a first live video stream 922, a second live video stream 924 and a third live video stream 926 are all overlaid on a live program application stream. Paragraph 0119 teaches more than one video production project. Paragraph 0157 teaches compositing engine 1140 is constructed to generate a composite video stream that includes a video stream for a video project and data of a first asset, in accordance with scene information provided. Paragraph 0172 teaches a platform ingest server, e.g., 1130 of FIG. 11A, receiving a first video stream, e.g., video game screen capture, webcam video, and the like. Paragraph 0206 teaches providing a video stream, to the platform ingest server 1130, that includes screen capture of game display output for a first user of a video game. As taught in, including but not limited to, the paragraphs above, a plurality of scenes may be created [0059]. Scenes may contain a live camera video stream {camera image} and/or a live program application stream shown as an online gaming display {game image} [0060]. As screen captures may also be video game captures, and as seen in, including but not limited to, Figs. 6, 7, and 10, the live digital streams {camera image} can be overlaid on the screen capture/application stream {game image})); and
transmitting the distribution image to a subset of image sharing servers of a set of image sharing servers (Paragraph 0080 teaches compositing engine 1140 is communicatively coupled to at least one broadcast ingest platform, e.g. 1153, 1154. Paragraph 0109 teaches compositing engine 1140 is constructed to provide a composite live video stream directly to broadcast ingest server, e.g., 1151. Paragraph 0157 teaches the compositing engine 1140 provides the composite video stream to a broadcast ingest server, e.g., 1152 configured for the video project by using user stream key information for each configured broadcast ingest server. In some embodiments, the platform system 1170 specifies broadcast ingest servers for a project. Paragraph 0083 teaches broadcast ingest servers include broadcast ingest servers of third-party streaming platform systems, such as for example, video streaming platform systems provided by Twitch.tv™, YouTube™, Facebook™, and the like).
Grubbs does not explicitly teach adding a first time stamp to the first stream;
adding a second timestamp to the second stream;
acquiring a second stream, including depth information based on the camera image;
processing a capture image from the camera image is extracting a cut-out image from the camera image based on the depth information;
combining the cut-out image with the image.
In an analogous art, Xie teaches adding a first time stamp to a first stream; adding a second timestamp to a second stream (Paragraph 0101, 0103, 0159, 0162 teaches adding time stamp information to a first stream and a second stream respectively).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Grubbs to include adding a first time stamp to a first stream; adding a second timestamp to a second stream, as taught by Xie, for the advantage of ensuring first and second stream data are played synchronously (Xie – Paragraph 0098), allowing the system to easily keep track of and combine differing streams for playback.
Grubbs and Xie do not explicitly teach acquiring a second stream, including depth information based on the camera image;
processing a capture image from the camera image is extracting a cut-out image from the camera image based on the depth information;
combining the cut-out image with the image.
In an analogous art, Takahashi teaches acquiring a second stream, including depth information based on a camera image; processing a capture image from the camera image is extracting a cut-out image from the camera image based on the depth information; combining the cut-out image with the image (Paragraph 0126-0131).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Grubbs and Xie to include acquiring a second stream, including depth information based on a camera image; processing a capture image from the camera image is extracting a cut-out image from the camera image based on the depth information; combining the cut-out image with the image, as taught by Takahashi, for the advantage of accurately separating the user image from the rest of the image without requiring the use of special backgrounds, simplifying setup and saving space, and providing the flexibility of easily including the user image in other image content.
Consider claims 8, 15, and 22, Grubbs, Xie, and Takahashi teach the operations further comprising:
acquiring, from the information processing apparatus, information for identifying the subset of image sharing servers from among the set of image sharing servers; and selecting, based on the information, the subset of image sharing servers from the set of image sharing servers (Grubbs - Paragraph 0042 teaches composited live streaming digital video output to the third-party streaming platform ingest service 112 can be accomplished through the user credentialing the processing system 110 for access to their respective account on the third-party streaming platform. Paragraph 0044 teaches user being credentialed through a user login system, such that user credentials may be used for access to accounts on respective third-party streaming platforms. Paragraph 0059 teaches an indicator 706 of the third party ingest platform to which a specific video broadcast project’s composite digital content stream will be sent. Paragraph 0080 teaches compositing engine 1140 is communicatively coupled to at least one broadcast ingest platform, e.g. 1153, 1154. Paragraph 0109 teaches compositing engine 1140 is constructed to provide a composite live video stream directly to broadcast ingest server, e.g., 1151. Paragraph 0157 teaches the compositing engine 1140 provides the composite video stream to a broadcast ingest server, e.g., 1152 configured for the video project by using user stream key information for each configured broadcast ingest server. In some embodiments, the platform system 1170 specifies broadcast ingest servers for a project. Paragraph 0083 teaches broadcast ingest servers include broadcast ingest servers of third-party streaming platform systems, such as for example, video streaming platform systems provided by Twitch.tv™, YouTube™, Facebook™, and the like).
Consider claims 10, 17, and 24, Grubbs, Xie, and Takahashi teach wherein combining the cut-out image and the game image based on the first timestamp and the second timestamp comprises superimposing the cut-out image on a portion of the game image (Grubbs - Paragraph 0034 teaches compositing engine is constructed to synchronize the first video stream and the second video stream by using time stamps included in the metadata of the first video stream and the second video stream. Paragraph 0041 teaches processing system 110 may be a composite live stream digital video output from the user 102. Paragraph 0042 teaches processing system 110 may be responsible for the processing and formatting of the multiple live digital content streams. Paragraph 0054 teaches the components of engine 408 process incoming data from digital content and digital content streams, and once the processing is complete, a composite digital content stream output is sent by the engine 408. Fig.7, Paragraph 0057 teaches live digital content streams 602 can be streams or static data from multiple and varied input devices, such as a webcam, generated static text assets, images, live display screen captures, web assets, and multiple other digital content inputs. Fig.8, Paragraph 0058 teaches in a main production screen output area 702, a scene is created and displayed from any number of live digital content streams and digital content assets which may be represented with icons on a toolbar within an asset area 704, and which may be switched on/off. Assets and streams switched off are not displayed in the main production screen output area 702, whereas assets and streams switched on are displayed in the main production screen output area 702. Paragraph 0059 teaches a control 714 to add additional scenes, which can be created and manipulated from the number of digital content streams and digital content assets represented in asset area 704. 
Paragraph 0060 teaches main production screen output area 702 showing a scene having a live digital content stream including a first live camera video stream 716, a second live camera video stream 718, a first live program application stream 720, shown as an online gaming display, and a second live program application stream 722, also shown as an online gaming display. Paragraph 0057, 0060 teaches placement of the streams may be infinitely interchangeable and positionable. Fig.10, Paragraph 0062 teaches a number of live stream data assets 904 are depicted in an overlay setting where a first live video stream 922, a second live video stream 924 and a third live video stream 926 are all overlaid on a live program application stream. Paragraph 0119 teaches more than one video production project. Paragraph 0157 teaches compositing engine 1140 is constructed to generate a composite video stream that includes a video stream for a video project and data of a first asset, in accordance with scene information provided. Paragraph 0172 teaches a platform ingest server, e.g., 1130 of FIG. 11A, receiving a first video stream, e.g., video game screen capture, webcam video, and the like. Paragraph 0206 teaches providing a video stream, to the platform ingest server 1130, that includes screen capture of game display output for a first user of a video game. As taught in, including but not limited to, the paragraphs above, a plurality of scenes may be created [0059]. Scenes may contain a live camera video stream {camera image} and/or a live program application stream shown as an online gaming display {game image} [0060]. As screen captures may also be video game captures, and as seen in, including but not limited to, Figs. 6, 7, and 10, the live digital streams {camera image} can be overlaid on the screen capture/application stream {game image}; Takahashi - Paragraph 0126-0131; Xie - Paragraph 0101, 0103, 0159, 0162).
Consider claims 11, 18, and 25, Grubbs, Xie, and Takahashi teach wherein the camera image comprises a first portion corresponding to the cut-out image and a second portion corresponding to a background image, wherein the combining the cut-out image with the game image based on the first timestamp and the second timestamp comprises overlaying the cut-out camera image on the game image, and wherein the operations further comprise preventing the second portion included in the camera image from being superimposed over the game image (Grubbs - Paragraph 0034 teaches compositing engine is constructed to synchronize the first video stream and the second video stream by using time stamps included in the metadata of the first video stream and the second video stream. Paragraph 0041 teaches processing system 110 may be a composite live stream digital video output from the user 102. Paragraph 0042 teaches processing system 110 may be responsible for the processing and formatting of the multiple live digital content streams. Paragraph 0054 teaches the components of engine 408 process incoming data from digital content and digital content streams, and once the processing is complete, a composite digital content stream output is sent by the engine 408. Fig.7, Paragraph 0057 teaches live digital content streams 602 can be streams or static data from multiple and varied input devices, such as a webcam, generated static text assets, images, live display screen captures, web assets, and multiple other digital content inputs. Fig.8, Paragraph 0058 teaches in a main production screen output area 702, a scene is created and displayed from any number of live digital content streams and digital content assets which may be represented with icons on a toolbar within an asset area 704, and which may be switched on/off. 
Assets and streams switched off are not displayed in the main production screen output area 702, whereas assets and streams switched on are displayed in the main production screen output area 702. Paragraph 0059 teaches a control 714 to add additional scenes, which can be created and manipulated from the number of digital content streams and digital content assets represented in asset area 704. Paragraph 0060 teaches main production screen output area 702 showing a scene having a live digital content stream including a first live camera video stream 716, a second live camera video stream 718, a first live program application stream 720, shown as an online gaming display, and a second live program application stream 722, also shown as an online gaming display. Paragraph 0057, 0060 teaches placement of the streams may be infinitely interchangeable and positionable. Fig.10, Paragraph 0062 teaches a number of live stream data assets 904 are depicted in an overlay setting where a first live video stream 922, a second live video stream 924 and a third live video stream 926 are all overlaid on a live program application stream. Paragraph 0119 teaches more than one video production project. Paragraph 0157 teaches compositing engine 1140 is constructed to generate a composite video stream that includes a video stream for a video project and data of a first asset, in accordance with scene information provided. Paragraph 0172 teaches a platform ingest server, e.g., 1130 of FIG. 11A, receiving a first video stream, e.g., video game screen capture, webcam video, and the like. Paragraph 0206 teaches provides a video stream, to the platform ingest server 1130, that includes screen capture of game display output for a first user of a video game. As taught in, including, but not limited to the paragraphs above, a plurality of scenes may be created [0059]. 
Scenes may contain a live camera video stream {camera image} and/or a live program application stream shown as an online gaming display {game image} [0060]. As screen capture/application streams are naturally made up of a background and the main game figure/character, there would be a main game portion and a background portion of the game. As can be seen in, including but not limited to, Fig. 7, a webcam {camera image} is overlaid onto one portion of the screen capture. Additionally, in Fig. 10, the live digital streams {camera image} are overlaid on the background image portion. Takahashi - Paragraph 0126-0131; Xie - Paragraph 0101, 0103, 0159, 0162).
Consider claims 12, 19, and 26, Grubbs, Xie, and Takahashi teach the first timestamp matches the second timestamp (Grubbs - Paragraph 0034).
Consider claims 28, 30, and 32, Grubbs, Xie, and Takahashi teach wherein at least one of the first stream or the second stream is a live stream (Grubbs – Paragraph 0041, 0057-0058, 0060, 0062).
Claim(s) 9, 16, and 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Grubbs et al. (US 2018/0035145), in view of Xie et al. (US 2021/0099733), in view of Takahashi et al. (US 2018/0091738), and further in view of Kedenburg, III (US 2019/0132650).
Consider claims 9, 16, and 23, Grubbs, Xie, and Takahashi teach wherein acquiring the first stream from the information processing apparatus comprises acquiring data associated with the game image (Grubbs - Paragraph 0034 teaches time stamps included in metadata of the first video stream and the second video stream. Paragraph 0105 teaches at least one user device, e.g., 1110, 1120, includes at least one video device that is constructed to generate video data. At least one capture module, e.g., 1111, 1121, is communicatively coupled to at least one video device that is constructed to generate video data. Video data generated by at least one video device may correspond to a video game. Video data generated by at least one video device may correspond to screen capture data. Paragraph 0112 teaches ingest server 1130 receives a video stream from capture module of a first user device, e.g., the capture module 1111 of user device 1110. Paragraph 0143 teaches capture module 1111 is constructed to capture video data generated by a video device of the user device 1110, generate a first video stream, and provide the video stream to the platform ingest server 1130. Paragraph 0152 teaches the platform ingest server 1130 is constructed to receive a video stream from the capture modules 1111 and 1121), and wherein transmitting the distribution image to the subset of image sharing servers comprises transmitting the data together with the distribution image to the subset of image sharing servers (Paragraph 0080 teaches compositing engine 1140 is communicatively coupled to at least one broadcast ingest platform, e.g. 1153, 1154. Paragraph 0109 teaches compositing engine 1140 is constructed to provide a composite live video stream directly to broadcast ingest server, e.g., 1151. 
Paragraph 0157 teaches the compositing engine 1140 provides the composite video stream to a broadcast ingest server, e.g., 1152 configured for the video project by using user stream key information for each configured broadcast ingest server. In some embodiments, the platform system 1170 specifies broadcast ingest servers for a project. Paragraph 0083 teaches broadcast ingest servers include broadcast ingest servers of third-party streaming platform systems, such as for example, video streaming platform systems provided by Twitch.tv™, YouTube™, Facebook™, and the like).
Grubbs, Xie, and Takahashi do not explicitly teach that the acquiring comprises acquiring content sound associated with the image, and transmitting the content sound together with the distribution image.
In an analogous art, Kedenburg teaches that the acquiring comprises acquiring content sound associated with the image, and transmitting the content sound together with the distribution image (Paragraph 0019, 0043).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Grubbs, Xie, and Takahashi to include that the acquiring comprises acquiring content sound associated with the image, and transmitting the content sound together with the distribution image, as taught by Kedenburg, for the advantage of providing a robust system that enables a live video broadcaster to share additional types of digital content in a live video broadcast (Kedenburg – Paragraph 0004), allowing the entirety of the acquired contents to be arranged together into the composite stream and providing viewers with a more complete viewing and listening experience.
Claim(s) 27, 29, and 31 is/are rejected under 35 U.S.C. 103 as being unpatentable over Grubbs et al. (US 2018/0035145), in view of Xie et al. (US 2021/0099733), in view of Takahashi et al. (US 2018/0091738), and further in view of Lopez Hernandez (US 10,785,511).
Consider claims 27, 29, and 31, Grubbs, Xie, and Takahashi teach wherein the depth information is obtained (Takahashi – Paragraph 0069), but do not explicitly teach that the depth information is obtained using a stereo camera that is configured to provide the camera image.
In an analogous art, Lopez Hernandez teaches depth information is obtained using a stereo camera that is configured to provide the camera image (Col 14: lines 16-40 teaches non-infrared depth sensors, such as passive stereo camera pairs, may be used in place of, or in addition to, infrared light sources of depth sensor 448 to gather/determine depth information).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Grubbs, Xie, and Takahashi to include that the depth information is obtained using a stereo camera that is configured to provide the camera image, as taught by Lopez Hernandez, for the advantage of providing an integrated solution to capture the desired information, streamlining components into one device while providing greater accuracy.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON K LIN whose telephone number is (571)270-1446. The examiner can normally be reached on Monday-Friday 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached on 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JASON K LIN/Primary Examiner, Art Unit 2425