Prosecution Insights
Last updated: April 19, 2026
Application No. 18/702,613

AREA PROFILES FOR DEVICES

Status: Final Rejection (§103)
Filed: Apr 18, 2024
Examiner: MADAMBA, GLENFORD J
Art Unit: 2451
Tech Center: 2400 — Computer Networks
Assignee: Hewlett-Packard Development Company, L.P.
OA Round: 2 (Final)

Grant Probability: 81% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (430 granted / 530 resolved; +23.1% vs TC avg, above average)
Interview Lift: +19.1% among resolved cases with interview (a strong lift)
Avg Prosecution: 2y 11m typical timeline (19 applications currently pending)
Total Applications: 549 across all art units (career history)

Statute-Specific Performance

§101: 13.9% (-26.1% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 19.0% (-21.0% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)
Deltas are measured against the Tech Center average estimate; based on career data from 530 resolved cases.
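The figures above are straightforward ratios over this examiner's resolved cases, with per-statute deltas expressed in percentage points against the Tech Center average. As a worked illustration of that arithmetic, here is a minimal sketch; it assumes the dashboard computes allow rate as grants divided by resolved cases and back-derives the TC averages from the stated deltas, which may not match the vendor's actual pipeline.

```python
# Minimal sketch of the dashboard arithmetic (an assumption, not the vendor's code).
# Inputs are the figures shown above; TC averages are back-derived from the deltas.

GRANTED = 430
RESOLVED = 530

def pct(x: float) -> str:
    """Format a fraction as a percentage with one decimal."""
    return f"{x * 100:.1f}%"

allow_rate = GRANTED / RESOLVED  # 430 / 530 ~ 0.811, displayed rounded as 81%
print("Career allow rate:", pct(allow_rate))

# Per-statute rates and their deltas vs the Tech Center average (from the cards).
statute_rates = {"§101": 0.139, "§103": 0.507, "§102": 0.190, "§112": 0.085}
tc_deltas = {"§101": -0.261, "§103": 0.107, "§102": -0.210, "§112": -0.315}

for statute, rate in statute_rates.items():
    tc_avg = rate - tc_deltas[statute]  # back out the TC average estimate
    print(f"{statute}: {pct(rate)} vs TC avg {pct(tc_avg)} "
          f"({tc_deltas[statute] * 100:+.1f} pts)")
```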

Office Action

Final Rejection (§103)
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to claim amendments and remarks filed by Applicant’s representative on November 20, 2025. Claims 1-15 are pending, no claims have been canceled, and new claims 16-20 have been added.

Response to Amendments and Remarks

Applicant’s latest filed claim amendments and corresponding remarks dated November 20, 2025 have been fully considered. Applicant’s remarks and/or comments are generally directed to the current claim amendment(s), and accordingly deemed moot in light of the new grounds of rejection provided with this action.

With regards to Applicant’s latest amendments and remarks, Applicant firstly notes that the independent claim(s), and particularly independent claim 1, has been further amended to now expressly recite “A device, comprising: a communication device; and a processor to: identify a plurality of devices within an area; collect video data and audio data from the plurality of devices within the area utilizing the communication device; generate an area profile for the area utilizing the video data and audio data from the plurality of devices; and provide the area profile to a communication application utilizing the communication device such that the communication application outside of the area views the video data and audio data from the plurality of devices as if the plurality of devices were a single user of the communication application”.

With respect to the above, Applicant remarks that none of the prior art references applied in rejecting independent claim 1 [Schirdewan et al, Crowe et al], either individually or in combination with other prior art disclosures, expressly and properly discloses or suggests the above amended claim features or limitations as currently recited by amended independent claim 1 (and similarly in independent claims 6 & 12). In particular, Applicant states that the prior art of record does not appear to teach at least the now recited / amended feature of “provide the area profile to a communication application utilizing the communication device such that the communication application outside of the area views the video data and audio data from the plurality of devices as if the plurality of devices were a single user of the communication application”, and thus the amended independent claims are distinguishable over the cited prior art [Applicant Remarks: par 3, pg. 6 – par 1, pg. 9]. Accordingly, Applicant remarks that claims 1-15 are distinguishable from the applied prior art and/or prior art combinations used to reject the claims.

However, in response to Applicant’s amended feature and associated remarks, the Office notes that the newly amended features are now expressly taught or disclosed in view of teachings and/or disclosures by at least Stokking et al, as discussed / cited below with this action.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-5, 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Schirdewan et al (hereinafter Schirdewan), US Patent 10027926 B1 (publication date July 2018), in view of Crowe et al (hereinafter Crowe), US Patent 10219008 B2 (publication date February 2019), and in further view of Stokking et al (hereinafter Stokking), US Patent Pub 20190045119 A1 (publication date February 2019).

As per claim(s) 1, Schirdewan discloses particular recited feature(s) of the invention, such as a device (Schirdewan: e.g., Video Conference Endpoint [EP] 104) [col 2, L30-47; Fig. 1], comprising: a communication device (Schirdewan: e.g., Network Interface Unit [NIU]_820, such as an Ethernet card or other interface device that allows the controller 800 to communicate over communication network 110) [col 12, L49-62; Fig. 8]; and a processor (Schirdewan: e.g., Processor 810) [col 12, L63-66; Fig. 1] to: identify a plurality of devices within an area (Schirdewan: e.g., as illustrated in FIG. 2, located ‘in proximity’ to the Video Conference Endpoint 104 (i.e., “within Conference room 200”) is Mobile device 204. As previously explained, the Mobile device 204 includes at least one camera 206 and at least one microphone 208…In the example of FIG. 2, Mobile device 204 may be ‘detected’ {identified} by the Video Conference Endpoint 104 when ‘in proximity’ to the Video Conference Endpoint 104. In one embodiment, the Video conference endpoint 104 may detect a mobile device 204 through a process referred to as “ultrasound detection.” The ultrasound detection uses a space-limited, inaudible, and unidirectional broadcast channel that conveys ‘connection information’ to Mobile devices 204 that are able to pick up the sound using, for example, the integrated microphone 208 of the Mobile device 204. The ‘connection information’ sent includes, but is not limited to, information that the Mobile device 204 may use to wirelessly connect (“pair”) to the video conference endpoint 104 or to establish communication with the video conference server 102 that is managing a video conference session in which the endpoint 104 is participating) [col 3, L46 – col 4, L4; Fig. 2] (e.g., The ‘operations’ at 305, 310, 315, 320, and 325 of FIG. 3 may be performed prior to the establishment of a video conference session or during a video conference session that is managed by the video conference server 102. In addition, these ‘operations’ may be performed for ‘each Mobile Device 204 detected’ in proximity of the Video Conference Endpoint 104(1) {i.e., when there are multiple mobile devices 204 within proximity of the video conference endpoint 104(1)}) [col 6, L46-53; Fig. 3]; as well as the feature of collect video data and audio data from the plurality of devices within the area utilizing the communication device (Schirdewan: e.g., At 325, the video conference endpoint 104 may then ‘add’ the announced camera 206 of the mobile device 204 to the media sources available to the video conference endpoint 104(1) during a video conference session. The announced Camera 206 {of Mobile Device 204} may form a ‘primary’ or ‘secondary media source’ for the Video conference endpoint 104(1). Thus, the Video conference endpoint 104(1) may ‘add’ the announced camera(s) 206 of the mobile device 204 to the list of available cameras (e.g., {Main} cameras 112A and/or 112B) for the video conference endpoint 104(1)) [col 5, L40-56; Fig. 3].

But while Schirdewan discloses substantial features of the invention as above, he does not expressly disclose the additional recited feature(s) of the method further comprising the step(s) of generate an area profile for the area utilizing the video data and audio data from the plurality of devices; and provide the area profile to a communication application utilizing the communication device. Nonetheless, the feature(s) is/are expressly disclosed by Crowe in a related endeavor.

In particular, Crowe discloses the additional recited feature(s) of the method further comprising the step(s) of generate an area profile for the area utilizing the video data and audio data from the plurality of devices; and provide the area profile to a communication application utilizing the communication device (Crowe: e.g., Aspects of the subject disclosure or method include, for example, receiving a ‘plurality of live video streams’ from a ‘plurality of communication devices’, the plurality of live video streams being associated with a ‘common event’ {i.e., live concert or sporting event}. Further aspects may include ‘aggregating the plurality of live video streams’ to generate a ‘composite video stream’ for presenting a ‘selectable viewing’ {defined / specified ‘viewing area’} of the common event and ‘providing the composite video stream to a device’ for presentation. Additional aspects may include ‘adjusting’ the composite video stream according to user-generated input to generate an ‘adjusted composite video stream’ {‘customized / specialized viewing area’, i.e., ‘targeted’, magnified, or ‘close-up’ view of the common event}, the user-generated input corresponding to a request to ‘adjust’ the {selected} presentation of the common event. Other aspects may include providing the ‘adjusted composite video stream’ to the device for ‘presentation’) [Abstract; col 1, L57 – col 2, L5, col 3, L27-44 & col 4, L24-33; Fig. 1] (e.g., In one or more embodiments, in response to a request to adjust the composite video stream, the media content server 134 can select a portion of live video streams from communication devices 112, 116, 120, 124, and 126 to aggregate to generate the adjusted composite video stream. In some embodiments, each image of the adjusted composite video stream can include an image of the selected moving object. In other embodiments, the request for adjusting the composite video stream can be for an ‘adjusted or magnified view’ of the Singer 104 of the concert event 102. Thus, the live video streams from communication devices 116 and 124 can provide live video streams that include a magnified view of the singer 104, for example) [col 7, L25-37; Fig. 3] (e.g., For example, at a car racing event, the composite video stream can be aggregated from multiple video streams, each of which can be from a communication device associated with a racing event attendee or a receiving venue. Further, a subscriber to the composite video stream of the car racing event can select a magnified or adjusted view of a particular race car (e.g. car no. 95) and adjust the composite video stream to provide a magnified or adjusted view of car no. 95 to the subscriber, accordingly. Thus, the subscriber can track car no. 95 individually from all the other cars participating in the racing event. In another example, at a football event, the composite video stream can be aggregated from multiple video streams, each of which can be from a communication device associated with a football game attendee or a game venue. Further, a subscriber to the composite video stream can select a magnified or adjusted view of a player (e.g. wide receiver) or object (e.g. football) and adjust the composite video stream to provide a magnified or adjusted view of the player or object to the subscriber, accordingly) [col 8, L8-29; Figs. 4-5].

It would thus be obvious to one of ordinary skill in the art before the effective date of the invention to modify and/or combine Schirdewan’s invention with the above said additional feature, as expressly disclosed by Crowe, for the motivation of providing systems and methods for aggregating video streams from a plurality of communication devices viewing a common event and generating a composite media content [Abstract; col 1, L57 – col 2, L5, col 3, L27-44 & col 4, L24-33; Fig. 1].

Further, while the combination of Schirdewan in view of Crowe discloses substantial features of the invention as above, they do not expressly disclose the additional recited feature(s) of the device further providing the area profile to a communication application utilizing the communication device “such that the communication application outside of the area views the video data and audio data from the plurality of devices as if the plurality of devices were a single user of the communication application”. Nonetheless, the feature(s) is/are expressly disclosed by Stokking in a related endeavor.

In particular, Stokking discloses the additional recited feature(s) of the device further providing the area profile to a communication application utilizing the communication device “such that the communication application outside of the area views the video data and audio data from the plurality of devices as if the plurality of devices were a single user of the communication application” (Stokking: e.g., discloses as his invention a method and system for generating an ‘output video’, such as a ‘composite video’, from a plurality of video streams representing different recordings of a scene. The invention further relates to a computer program comprising instructions for causing a processor system to perform the method and generate the ‘output video’ {single / composite video}) [0001; Figs. 1 & 2] (e.g., it is known to generate a ‘composite video’ {single video} from a ‘plurality of video streams’ representing different recordings or captures of a scene. For example, the composite video of the scene may be generated from a combination of multiple videos recorded by multiple unsupervised (mobile) devices. In such a situation, it may be possible to create a ‘composite view’ of the scene by combining the recordings of the multiple devices.
The composite view may be enhanced, in that it may provide a wider field of view of the scene, a higher spatial resolution, a higher frame rate, etc. In case of a wider field of view, such a composite video is often referred to as a panorama, and may involve a ‘stitching’ technique to process the individual and separate videos in such a way that they jointly provide a ‘composite and panoramic video’) [0002-0003] (e.g., FIG. 1 shows a scene 020 being recorded by a plurality of recording devices 100-102. The field of view 110-112 of the respective recording devices is schematically indicated by dashed lines, indicating that the view of the scene 020 obtained by each of the recording devices 100-102 differs. As a result, different recordings of the scene may be obtained. The recording devices 100-102 may also function as stream sources, in that they may make their recording available as a (real-time) video stream. In accordance with the invention as claimed, a System 200 may be provided for “generating an ‘output video’ from the plurality of video streams”) [0082; Fig. 1] (e.g., FIG. 2 shows two Recording devices_100, 101, being in this example smartphones, recording a scene 020, each with their respective field of view 110, 111. The first smartphone 100 may capture a ‘first part’ of the scene and the second smartphone 101 may capture a ‘second part’ of the scene. There may be a significant overlap in the resulting captured frames 160, 162, shown as a light greyed and dark greyed area (160 left side and the middle and 162 right side only). Each of the smartphones may be connected to ‘Stitching Server’_202, sending their frames to the server and receiving instructions for pre-processing the content before sending further frames (not shown). When the Stitching Server 202 receives the captured frames 160, 162, it may analyze these frames as part of the ‘stitching’ process…) [0084; Fig. 2] (e.g., For example, in case the ‘output video’ is a ‘video panorama’, the known analysis may involve identifying keypoints in the video data of the different video streams and then mutually aligning the keypoints so as to generate {output} a ‘video panorama’…The processor 240 thus performs such an analysis and explicitly identifies the ‘contributing part’ of the video stream 130…In a non-limiting example, the ‘output video’ as generated by the System 200 may provide a ‘spatial composite’ of the plurality of video streams, such as a ‘video panorama’) [0088-0089; Fig. 3].

It would thus be obvious to one of ordinary skill in the art before the effective date of the invention to modify the combination of Schirdewan and Crowe with the above said additional feature, as expressly disclosed by Stokking, for the motivation of providing a method and system for generating an output video, such as a composite video, from a plurality of video streams representing different recordings of a scene [Abstract; 0001-0002; Figs. 1 & 2].

As per claim(s) 2, Schirdewan discloses the device wherein the processor is to restrict the communication device from collecting video and audio from devices outside the area (Schirdewan: e.g., as illustrated in FIG. 2, located ‘in proximity’ to the Video Conference Endpoint 104 (i.e., “within Conference room 200”) is Mobile device 204. As previously explained, the Mobile device 204 includes at least one camera 206 and at least one microphone 208…In the example of FIG. 2, Mobile device 204 may be ‘detected’ {identified} by the Video Conference Endpoint 104 when ‘in proximity’ to the Video Conference Endpoint 104. In one embodiment, the Video conference endpoint 104 may detect a mobile device 204 through a process referred to as “ultrasound detection.” The ultrasound detection uses a ‘space-limited’, inaudible, and unidirectional broadcast channel that conveys ‘connection information’ to mobile devices 204 that are able to pick up the sound using, for example, the integrated microphone 208 of the Mobile device 204. The ‘connection information’ sent includes, but is not limited to, information that the Mobile device 204 may use to wirelessly connect (“pair”) to the video conference endpoint 104 or to establish communication with the video conference server 102 that is managing a video conference session in which the endpoint 104 is participating {in this regard, the Office notes that only communication devices that are ‘within proximity’ of the Video Conference Endpoint 104 are able to receive the ‘connection information’ and establish a communication session with the Video Conference Endpoint 104 / Video Conference Server 102; thus, ‘collection of audio / video’ from the devices is ‘limited’ only to those devices that are “within proximity” and able to establish a connection with Video Conference Endpoint 104 / Video Conference Server 102}) [col 3, L46 – col 4, L4; Fig. 2].

As per claim(s) 3, Schirdewan discloses the device wherein the communication device is an ultrasonic communication device that utilizes a signal limited by barriers of the area (Schirdewan: e.g., as illustrated in FIG. 2, located ‘in proximity’ to the Video Conference Endpoint 104 (i.e., “within Conference room 200”) is Mobile device 204. As previously explained, the Mobile device 204 includes at least one camera 206 and at least one microphone 208…In the example of FIG. 2, Mobile device 204 may be ‘detected’ {identified} by the Video Conference Endpoint 104 when ‘in proximity’ to the Video Conference Endpoint 104. In one embodiment, the Video conference endpoint 104 may detect a Mobile device 204 through a process referred to as “ultrasound detection.” The ultrasound detection uses a ‘space-limited’, inaudible, and unidirectional broadcast channel that conveys ‘connection information’ to mobile devices 204 that are able to pick up the sound using, for example, the integrated microphone 208 of the Mobile device 204. The ‘connection information’ sent includes, but is not limited to, information that the Mobile device 204 may use to wirelessly connect (“pair”) to the video conference endpoint 104 or to establish communication with the video conference server 102 that is managing a video conference session in which the endpoint 104 is participating) [col 3, L46 – col 4, L4; Fig. 2].

As per claim(s) 4, Schirdewan in view of Crowe in view of Stokking, and Crowe in particular, discloses the device wherein the area profile includes audio and video from each of the plurality of devices to be displayed within a single profile of the communication application (Crowe: e.g., Aspects of the subject disclosure or method include, for example, receiving a ‘plurality of live video streams’ from a ‘plurality of communication devices’, the plurality of live video streams being associated with a ‘common event’ {i.e., live concert or sporting event}.
Further aspects may include ‘aggregating the plurality of live video streams’ to generate a ‘Composite Video Stream’ for presenting a ‘selectable viewing’ {defined / specified ‘viewing area’} of the common event and providing the ‘composite video stream’ to a device for presentation. Additional aspects may include ‘adjusting’ the composite video stream according to user-generated input to generate an ‘adjusted composite video stream’ {‘customized / specialized viewing area’, i.e., ‘targeted’, magnified, or ‘close-up’ view of the common event}, the user-generated input corresponding to a request to ‘adjust’ the {selected} presentation of the common event. Other aspects may include providing the ‘adjusted composite video stream’ to the device for ‘presentation’) [Abstract; col 1, L57 – col 2, L5, col 3, L27-44 & col 4, L24-33; Fig. 1] [also Figs. 2-3].

As per claim(s) 5, Schirdewan in view of Crowe in view of Stokking, and Crowe in particular, discloses the device wherein the processor is to generate a single image of the area utilizing the video data from the plurality of devices within the area (Crowe: e.g., Aspects of the subject disclosure or method include, for example, receiving a ‘plurality of live video streams’ from a ‘plurality of communication devices’, the plurality of live video streams being associated with a ‘common event’ {i.e., live concert or sporting event}. Further aspects may include ‘aggregating the plurality of live video streams’ to generate a ‘Composite Video Stream’ for presenting a ‘selectable viewing’ {single image or ‘defined / specified viewing area’} of the common event and providing the ‘composite video stream’ to a device for presentation. Additional aspects may include ‘adjusting’ the composite video stream according to user-generated input to generate an ‘adjusted composite video stream’ {single image or ‘customized / specialized viewing area’, i.e., ‘targeted’, magnified, or ‘close-up’ view of the common event}, the user-generated input corresponding to a request to ‘adjust’ the {selected} presentation of the common event. Other aspects may include providing the ‘adjusted composite video stream’ to the device for ‘presentation’) [Abstract; col 1, L57 – col 2, L5, col 3, L27-44 & col 4, L24-33; Fig. 1] (e.g., displayed / presented ‘composite media content’ / ‘adjusted composite media content’ {single image}) [col 5, L49 – col 6, L42; Figs. 2-3].

As per claim(s) 12, Schirdewan in view of Crowe in view of Stokking discloses a system, comprising: an ultrasonic wireless transmitter; a display device; and a processor to: determine a plurality of mobile devices within a defined area; instruct the ultrasonic wireless transmitter to collect audio data from microphones of the plurality of mobile devices; instruct the ultrasonic wireless transmitter to collect video data from cameras of the plurality of mobile devices; combine the audio data and the video data from the plurality of mobile devices; generate a profile for the defined area utilizing the combined audio data and video data for a communication session; and display the profile for the defined area on the display device with a plurality of profiles of remote devices. With regards to the claim, Claim 12 recites substantially the same features and/or limitations as those recited by the combination of claims 1 & 3 previously rejected above. The claim is also distinguishable only by its statutory category (system), and accordingly rejected on the same basis.
As per claim(s) 16, Schirdewan in view of Crowe in view of Stokking, and Stokking in particular, discloses the device wherein the single image includes video data from each of the plurality of devices viewed individually within the area profile (Stokking: e.g., FIG. 2 shows two Recording devices_100, 101, being in this example smartphones, recording a scene 020, each with their respective field of view 110, 111. The first smartphone 100 may capture a ‘first part’ of the scene and the second smartphone 101 may capture a ‘second part’ of the scene. There may be a significant overlap in the resulting captured frames 160, 162, shown as a light greyed and dark greyed area (160 left side and the middle and 162 right side only). Each of the smartphones may be connected to ‘Stitching Server’_202, sending their frames to the server and receiving instructions for pre-processing the content before sending further frames (not shown). When the Stitching Server 202 receives the captured frames 160, 162, it may analyze these frames as part of the ‘stitching’ process…) [0084; Fig. 2] (e.g., For example, in case the ‘output video’ is a ‘video panorama’, the known analysis may involve identifying keypoints in the ‘video data of the different video streams’ and then mutually aligning the keypoints so as to generate {output} a ‘video panorama’…The processor 240 thus performs such an analysis and explicitly identifies the ‘contributing part’ of the video stream 130…In a non-limiting example, the ‘output video’ as generated by the System 200 may provide a ‘spatial composite’ of the plurality of video streams, such as a ‘video panorama’) [0088-0089; Fig. 3]. The motivation for combining the prior art is the same as that given for claim 1 above.

As per claim(s) 17, Schirdewan in view of Crowe in view of Stokking, and Stokking in particular, discloses the device wherein the single image includes video data from each of the plurality of devices stitched together (Stokking: e.g., FIG. 2 shows two Recording devices_100, 101, being in this example smartphones, recording a scene 020, each with their respective field of view 110, 111. The first smartphone 100 may capture a ‘first part’ of the scene and the second smartphone 101 may capture a ‘second part’ of the scene. There may be a significant overlap in the resulting captured frames 160, 162, shown as a light greyed and dark greyed area (160 left side and the middle and 162 right side only). Each of the smartphones may be connected to ‘Stitching Server’_202, sending their frames to the server and receiving instructions for pre-processing the content before sending further frames (not shown). When the Stitching Server 202 receives the captured frames 160, 162, it may analyze these frames as part of the ‘stitching’ process…) [0084; Fig. 2] (e.g., For example, in case the ‘output video’ is a ‘video panorama’, the known analysis may involve identifying keypoints in the video data of the different video streams and then mutually aligning the keypoints so as to generate {output} a ‘video panorama’…The processor 240 thus performs such an analysis and explicitly identifies the ‘contributing part’ of the video stream 130…In a non-limiting example, the ‘output video’ as generated by the System 200 may provide a ‘spatial composite’ of the plurality of video streams, such as a ‘video panorama’) [0088-0089; Fig. 3]. The motivation for combining the prior art is the same as that given for claim 1 above.

Claim(s) 6, 9, 10, 13, 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Schirdewan in view of Crowe in view of Stokking and in further view of Sinharoy et al (hereinafter Sinharoy), US Patent 11924397 B2 (filing date July 2021).

As per claim(s) 6, Schirdewan in view of Crowe in view of Stokking discloses particular recited feature(s) of the invention, such as a non-transitory memory resource storing machine-readable instructions thereon that, when executed, cause a processor of a computing device to: collect video data and audio data from the plurality of devices and ‘during a conference session’ (Schirdewan: e.g., At 325, the video conference endpoint 104 may then ‘add’ the announced camera 206 of the mobile device 204 to the media sources available to the video conference endpoint 104(1) ‘during a video conference session’. The announced Camera 206 {of Mobile Device 204} may form a ‘primary’ or ‘secondary media source’ for the Video conference endpoint 104(1). Thus, the Video conference endpoint 104(1) may ‘add’ the announced camera(s) 206 of the mobile device 204 to the list of available cameras (e.g., {Main} cameras 112A and/or 112B) for the video conference endpoint 104(1)) [col 5, L40-56; Fig. 3]; as well as the additional recited feature(s) of generate an area profile for the defined area; combine the video data and audio data for the plurality of devices; and transmit the combined video data and audio data for the plurality of devices as the area profile (Crowe: e.g., Aspects of the subject disclosure or method include, for example, receiving a ‘plurality of live video streams’ from a ‘plurality of communication devices’, the plurality of live video streams being associated with a ‘common event’ {i.e., live concert or sporting event}. Further aspects may include ‘aggregating the plurality of live video streams’ to generate a ‘composite video stream’ for presenting a ‘selectable viewing’ {defined / specified ‘viewing area’} of the common event and ‘providing the composite video stream to a device’ for presentation. Additional aspects may include ‘adjusting’ the composite video stream according to user-generated input to generate an ‘adjusted composite video stream’ {‘customized / specialized viewing area’, i.e., ‘targeted’, magnified, or ‘close-up’ view of the common event}, the user-generated input corresponding to a request to ‘adjust’ the {selected} presentation of the common event. Other aspects may include providing the ‘adjusted composite video stream’ to the device for ‘presentation’) [Abstract; col 1, L57 – col 2, L5, col 3, L27-44 & col 4, L24-33; Fig. 1] (e.g., For example, at a ‘car racing event’, the composite video stream can be aggregated from multiple video streams, each of which can be from a communication device associated with a racing event attendee or a receiving venue. Further, a subscriber to the composite video stream of the car racing event can select a magnified or adjusted view of a particular race car (e.g. car no. 95) and adjust the composite video stream to provide a magnified or adjusted view of car no. 95 to the subscriber, accordingly. Thus, the subscriber can track car no. 95 individually from all the other cars participating in the racing event. In another example, at a football event, the composite video stream can be aggregated from multiple video streams, each of which can be from a communication device associated with a football game attendee or a game venue. Further, a subscriber to the composite video stream can select a magnified or adjusted view of a player (e.g. wide receiver) or object (e.g. football) and adjust the composite video stream to provide a magnified or adjusted view of the player or object to the subscriber, accordingly) [col 8, L8-29; Figs. 4-5].

But while Schirdewan in view of Crowe in view of Stokking discloses substantial features of the invention as above, they do not expressly disclose the additional recited feature(s) of the processor further performing the step(s) of provide an advertisement to a plurality of devices within a defined area. Nonetheless, the feature(s) is/are expressly disclosed by Sinharoy in a related endeavor.

In particular, Sinharoy discloses the additional recited feature(s) of the processor further performing the step(s) of provide an advertisement to a plurality of devices within a defined area (Sinharoy: e.g., in certain embodiments, contextual advertisements may be ‘embedded’ by the system during live and VOD streaming of contents) [col 5, L1-15 & col 18, L1-3; Fig. 4d]. It would thus be obvious to one of ordinary skill in the art before the effective date of the invention to modify the combination with the above said additional feature, as expressly disclosed by Sinharoy, for the motivation of providing systems and methods for the generation and distribution of immersive media content from crowdsourced streams captured via distributed mobile devices [Abstract; col 1, L18-21; Fig. 1].

As per claim(s) 9, Schirdewan in view of Crowe in view of Stokking in view of Sinharoy discloses the memory resource wherein the processor is to: determine when an additional device enters the defined area; send an advertisement to the additional device; collect video data and audio data from the additional device; and combine the video data and the audio data from the additional device to the area profile for the defined area. With regards to the claim, Claim 9 recites substantially the same feature(s) and/or limitation(s) as those recited by the combination of claims 1 and 6, previously rejected above, and the claim is accordingly rejected on the same basis.

As per claim(s) 10, 13, 14, Schirdewan in view of Crowe in view of Stokking in view of Sinharoy, and Crowe in particular, discloses the memory resource wherein the processor is to determine a location within the defined area of the plurality of devices and combine the video data from the plurality of devices to generate a video image of the defined area (Crowe: e.g., Aspects of the subject disclosure or method include, for example, receiving a ‘plurality of live video streams’ from a ‘plurality of communication devices’, the plurality of live video streams being associated with a ‘common event’ {i.e., live concert or sporting event}. Further aspects may include ‘aggregating the plurality of live video streams’ to generate a ‘composite video stream’ for presenting a ‘selectable viewing’ {defined / specified ‘viewing area’} of the common event and ‘providing the composite video stream to a device’ for presentation. Additional aspects may include ‘adjusting’ the composite video stream according to user-generated input to generate an ‘adjusted composite video stream’ {‘customized / specialized viewing area’, i.e., ‘targeted’, magnified, or ‘close-up’ view of the common event}, the user-generated input corresponding to a request to ‘adjust’ the {selected} presentation of the common event. Other aspects may include providing the ‘adjusted composite video stream’ to the device for ‘presentation’) [Abstract; col 1, L57 – col 2, L5, col 3, L27-44 & col 4, L24-33; Fig. 1] (e.g., In one or more embodiments, in response to a request to adjust the composite video stream, the media content server 134 can select a portion of live video streams from communication devices 112, 116, 120, 124, and 126 to aggregate to generate the adjusted composite video stream. In some embodiments, each image of the adjusted composite video stream can include an image of the selected moving object. In other embodiments, the request for ‘adjusting’ the composite video stream can be for an ‘adjusted or magnified view’ of the Singer 104 of the concert event 102. Thus, the live video streams from Communication devices 116 and 124 can provide live video streams that include a magnified view of the Singer 104, for example) [col 7, L25-37; Fig. 3] (e.g., For example, at a car racing event, the composite video stream can be aggregated from multiple video streams, each of which can be from a communication device associated with a racing event attendee or a receiving venue. Further, a subscriber to the composite video stream of the car racing event can select a ‘magnified or adjusted view’ of a ‘particular race car’ (e.g. “Car No. 95”) and adjust the composite video stream to provide a magnified or adjusted view of ‘Car No. 95’ to the Subscriber, accordingly. Thus, the Subscriber can ‘track’ Car no. 95 individually from all the other cars participating in the racing event. In another example, at a ‘football event’, the composite video stream can be aggregated from multiple video streams, each of which can be from a Communication Device associated with a football game attendee or a ‘game venue’. Further, a Subscriber to the composite video stream can select a ‘magnified or adjusted view’ of a ‘Player’ (e.g. ‘Wide Receiver’) or ‘object’ (e.g. football) and ‘adjust’ the composite video stream to provide a ‘magnified or adjusted view of Player or object’ to the Subscriber, accordingly) [col 8, L8-29; Figs. 4-5].

Claim(s) 7, 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Schirdewan in view of Crowe in view of Stokking in view of Sinharoy and in further view of Doyle et al (hereinafter Doyle), US Patent Pub 20190026277 A1 (publication date January 2019).

As per claim(s) 7, while the combination of Schirdewan, Crowe, Stokking and Sinharoy discloses particular recited feature(s) of the invention as in claim 6 above, they do not expressly recite the additional recited feature(s) of the memory resource wherein the processor is to determine when one of the plurality of devices leaves the defined area. Nonetheless, the feature(s) is/are expressly disclosed by Doyle in a related endeavor.

In particular, Doyle discloses the additional recited feature of the memory resource wherein the processor is to determine when one of the plurality of devices leaves the defined area (Doyle: e.g., discloses a system for creating an Audio-Visual Recording of an Event by a plurality of devices, the recorded A-V clips including metadata that includes, among others, ‘location data’ indicating a location of the respective audio-visual clip by one or more of the devices, wherein the event was recorded within a ‘predetermined range of locations’ of the predetermined event, and ‘selecting A-V clips’ that were recorded ‘within predetermined times and locations’ indicative of the predetermined event) [0017; Fig. 1] (e.g., the ‘location data’ {of the recordings and accordingly the ‘location’ of the recording devices} may comprise at least one of ‘grid reference’, ‘latitude’ and/or ‘longitude’, ‘altitude’, ‘height’, ‘elevation’ and/or ‘floor of building’, ‘bearing and/or direction of view’ {i.e., via a magnetometer or similar device}, and/or ‘angle and/or inclination of view’ {i.e., via an accelerometer, gyroscope or similar device}) [0023] (e.g., The ‘metadata’ may also further comprise ‘movement data’, which may comprise at least one of velocity, speed, direction and/or course) [0025]. It would thus be obvious to one of ordinary skill in the art before the effective date of the invention to modify the combination with the above said additional feature, as expressly disclosed by Doyle, for the motivation of providing a system and method for the creation of an Audio-Visual Recording of an event by one or more devices {i.e., smartphones} [Abstract; 0002-00053; 0061; Fig. 1].

As per claim(s) 8, Schirdewan in view of Crowe in view of Stokking and in further view of Sinharoy and Doyle, and Doyle in particular, discloses the memory resource wherein the processor is to stop collecting video data and audio data from the one of the plurality of devices that leaves the defined area (Doyle: e.g., discloses a system for creating an Audio-Visual Recording of an Event by a plurality of devices {i.e., smartphones}, the recorded A-V clips including metadata that includes, among others, ‘location data’ indicating a location of the respective audio-visual clip by one or more of the devices, wherein the event was recorded within a ‘predetermined range of locations’ of the predetermined event, and ‘selecting A-V clips’ that were recorded ‘within predetermined times and locations’ indicative of the predetermined event {in this regard, the Office notes that Doyle is thus able to determine, from among the A-V clips that have been recorded by the plurality of devices, which clips have been recorded within a predetermined time / location, and select {collect} only those recorded A-V clips}) [0017; 0061; Fig. 1]. The motivation for the prior art combination is similar to that given above for claim 7.

Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Schirdewan in view of Crowe in view of Stokking in view of Sinharoy and in further view of Lashmar et al (hereinafter Lashmar), US Patent Pub 20200219319 A1 (publication date July 2020).

As per claim(s) 11, while the combination of Schirdewan, Crowe, Stokking and Sinharoy discloses particular recited feature(s) of the invention as in claim 6 above, they do not expressly recite the additional recited feature(s) of the memory resource wherein the advertisement provides access to a camera and a microphone of the plurality of devices to collect the video data and the audio data from the plurality of devices. Nonetheless, the feature(s) is/are expressly disclosed by Lashmar in a related endeavor.

In particular, Lashmar discloses the additional recited feature of the memory resource wherein the advertisement provides access to a camera and a microphone of the plurality of devices to collect the video data and the audio data from the plurality of devices (Lashmar: e.g., Processing logic (e.g., 132) is configured to execute instructions of at least one software program to receive an ‘advertising request’ from a client device 102, 104, 106.
An ‘ad request’ may be sent by a device upon the device having an ad play event for an initiated software application and/or upon initiation of an application in which ads may be presented. In some implementations, the ‘ad request’ may identify one or more of the user that is associated with the device and/or the application executing on the device through which the advertisement is to be presented, device type, device capabilities, and the ‘access permission’ to various components of the device by the executing application {i.e., access to the device’s camera, microphone, etc.}) [0026] [0054; Fig. 5]. It would thus be obvious to one of ordinary skill in the art before the effective date of the invention to modify the combination with the above said additional feature, as expressly disclosed by Lashmar, for the motivation of providing systems and methods for dynamically generating advertisements for presentation in an application executing on a client device, such as an application executing on a smart phone or tablet of a user, where the described systems and methods select content items of specific content types based on determined user preference for content types as well as the device capabilities and ‘access permissions’ of the application through which the advertisement is to be presented [Abstract, 0016; Fig. 1].

Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Schirdewan in view of Crowe in view of Stokking and in further view of Mayerhofer et al (hereinafter Mayerhofer), US Patent Pub 20060190616 A1 (publication date August 2006).

As per claim(s) 15, while the combination of Schirdewan, Crowe, and Stokking discloses particular recited feature(s) of the invention as in claim 12 above, they do not expressly recite the additional recited feature(s) of the system wherein the processor is to send a request to the plurality of mobile devices when the plurality of mobile devices are within the defined area, wherein the request includes an advertisement to log on to the profile for the defined area. Nonetheless, the feature(s) is/are expressly disclosed by Mayerhofer in a related endeavor.

In particular, Mayerhofer discloses the additional recited feature(s) of the system wherein the processor is to send a request to the plurality of mobile devices when the plurality of mobile devices are within the defined area, wherein the request includes an advertisement to log on to the profile for the defined area (Mayerhofer: e.g., In FIG. 8C, a first user interface displays a ‘Nike specific advertisement’ with the podcast content, wherein the user is also provided a Call button that permits the user to call Nike and order the shoes as shown in a user interface 166) [0157; Fig. 8c] (e.g., FIG. 9A is an example of a user interface for ‘logging’ into the system…A ‘website’ provides a ‘portal’ (HTML interface) for users to ‘find, organize, listen, and share content’. The website may also include self-service areas for Podcasters, Advertisers, and general corporate information. In order to access the main functions of the consumer portal (or the Podcaster or Advertiser areas), the user must ‘login’ using a username and password as shown in FIG. 9A) [0173; Fig. 9a].
It would thus be obvious to one of ordinary skill in the art before the effective date of the invention to modify the combination with the above said additional feature, as expressly disclosed by Mayerhofer, for the motivation of providing a system and method for aggregating, delivering and sharing media content {i.e., audio content} [Abstract; 0007, 0012-0013; Fig. 1].

Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Schirdewan in view of Crowe in view of Stokking, in further view of Sinharoy et al (hereinafter Sinharoy), US Patent 11924397 B2 (filing date July 2021), and in further view of Chang et al (hereinafter Chang), US Patent Pub 20030215218 A1 (publication date November 2003).

As per claim(s) 18, while the combination of Schirdewan, Crowe, Stokking and Sinharoy discloses particular recited feature(s) of the invention above, they do not expressly disclose the additional recited feature(s) of the memory resource wherein the processor is to transmit the combined video data for the plurality of devices through a single virtual video driver, and to transmit the combined audio data for the plurality of devices through a single virtual audio driver. Nonetheless, the feature(s) is/are expressly disclosed by Chang in a related endeavor.

In particular, Chang discloses the additional recited feature(s) of the memory resource wherein the processor is to transmit the combined video data for the plurality of devices through a single virtual video driver, and to transmit the combined audio data for the plurality of devices through a single virtual audio driver (Chang: e.g., Preferably, the video sources and the audio sources are ‘combined’ to form A/V sources for generating ‘A/V data’; the video data capture devices and the audio data capture devices are combined to form A/V data capture devices for receiving the A/V data from the A/V sources; the virtual video driver(s) and the virtual audio driver(s) are ‘combined’ to form ‘virtual A/V driver(s)’ each of which drives input A/V data to be processed in accordance with corresponding one of the second set of application programs) [0017; Fig. 1]. It would thus be obvious to one of ordinary skill in the art before the effective date of the invention to modify the combination with the above said additional feature, as expressly disclosed by Chang, for the motivation of providing a method and system for processing audio / video data provided from multiple sources, such as local and remote audio / visual sources, and which overcomes the limitations posed by a data process system that employs only physical drivers [Abstract; 0002, 0011; Fig. 1].

Claim(s) 19, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Schirdewan in view of Crowe in view of Stokking and in further view of Kim et al (hereinafter Kim), US Patent Pub 20150020119 A1 (publication date January 2015).

As per claim(s) 19, while the combination of Schirdewan, Crowe, and Stokking discloses particular recited feature(s) of the invention above, they do not expressly disclose the additional recited feature(s) of the system wherein the processor is to display the profile for the defined area on the display device in a first window and to display the plurality of profiles of remote devices in a corresponding plurality of second windows. Nonetheless, the feature(s) is/are expressly disclosed by Kim in a related endeavor.
In particular, Kim discloses the additional recited feature(s) of the system wherein the processor is to display the profile for the defined area on the display device in a first window (Kim: e.g., User Interface Video Stream_720) [0106; Fig. 7b] (e.g., Combined Frame_841) [0110; Fig. 8a] and to display the plurality of profiles of remote devices in a corresponding plurality of second windows (Kim: e.g., Slice Groups_721, 722, 723 displayed in a personal user interface of User device 200) [0106; Fig. 7b] (e.g., Image Frames_801, Image Frame_811 & Image Frame_821 displayed within Combined Frame_841) [0109-0110; Fig. 8a]. It would thus be obvious to one of ordinary skill in the art before the effective date of the invention to modify the combination with the above said additional feature, as expressly disclosed by Kim, for the motivation of providing a system and method for providing a personalized user interface managing multimedia streams received in real time from one or more streaming sources [Abstract; 0003-00066; Figs. 1, 7b, 8a-b, 10].

As per claim(s) 20, Schirdewan in view of Crowe in view of Stokking in view of Sinharoy and in view of Kim, and Kim in particular, discloses the system wherein a size of the first window is substantially the same as respective sizes of the plurality of second windows (Kim: expressly discloses / illustrates in one aspect wherein Image Frame_801 is substantially the same in size as Image Frames_811, 821 & 831) [0112; Fig. 8b] (e.g., expressly discloses in one aspect where the sizes of all the plurality of multimedia streams displayed within Single Screen_1000 are substantially the same) [0117-0118; Fig. 10]. The motivation for combining the prior art is similar to that given in claim 18 above.

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office Action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GLENFORD J MADAMBA, whose telephone number is (571) 272-7989. The examiner can normally be reached Monday through Friday, 9am-5pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Parry, can be reached at 571-272-8328. The fax phone number for the organization where this application or proceeding is assigned is 703-872-9306.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/GLENFORD J MADAMBA/
Primary Examiner, Art Unit 2451
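As an aside for technically minded readers: the keypoint-based stitching the rejection draws from Stokking (detecting keypoints across overlapping device frames and aligning them into a panorama) is standard computer vision. The sketch below shows the general technique using OpenCV's high-level stitcher; the file names are placeholders, and this is purely illustrative rather than code from any cited reference.

```python
# Illustrative panorama stitching in the spirit of the keypoint-alignment
# technique cited from Stokking; not taken from any reference of record.
import cv2

# Placeholder captures from two devices recording overlapping views of an area.
frames = [cv2.imread(path) for path in ("device_a.jpg", "device_b.jpg")]

# The high-level Stitcher detects keypoints, matches them across frames,
# and warps/blends the inputs into a single composite image.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:  # status code 0 means success
    cv2.imwrite("area_panorama.jpg", panorama)
else:
    print(f"Stitching failed with status code {status}")
```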

Prosecution Timeline

Apr 18, 2024: Application Filed
Aug 23, 2025: Non-Final Rejection (§103)
Nov 20, 2025: Response Filed
Feb 03, 2026: Final Rejection (§103, current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12598221
CALL PROCESSING METHOD AND CALL PROCESSING APPARATUS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592976
REMOTE DESKTOP CONNECTION COMMUNICATIONS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12587493
GENERATIVE MACHINE LEARNING MODEL FOR PERSONALIZED KNOWLEDGE SESSION CONTENT
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12563111
APPLICATION ACCESS SIGNAL FOR VIDEOCONFERENCES
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12561043
METHOD AND DEVICE FOR DISPLAYING GRAPHIC OBJECT
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get these applications past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 81%
With Interview: 99% (+19.1%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 530 resolved cases by this examiner. Grant probability derived from career allow rate.
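The 99% figure is consistent with taking the 81% career allow rate and adding the +19.1-point interview lift, capped just below certainty. Whether the tool computes it exactly this way is not documented here, so the sketch below is a plausible reconstruction under that assumption, not the actual model.

```python
# Hypothetical reconstruction of the "With Interview" projection. The cap is an
# assumption: 81.1% + 19.1 points would exceed 100%, and the dashboard shows 99%.
BASE_GRANT_PROB = 430 / 530  # career allow rate, ~0.811 (displayed as 81%)
INTERVIEW_LIFT = 0.191       # +19.1 percentage points among interviewed cases
DISPLAY_CAP = 0.99           # assumed ceiling on the displayed probability

with_interview = min(BASE_GRANT_PROB + INTERVIEW_LIFT, DISPLAY_CAP)
print(f"Grant probability with interview: {with_interview:.0%}")  # -> 99%
```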
