Session 21: Workshop - Building a Complete AI Solution
Capstone project - complete enterprise AI solution
🎯 Workshop goals
- Implement a complete enterprise AI solution
- Integrate all AI technologies covered in the course
- Production deployment with monitoring and scaling
- A portfolio project demonstrating mastery of the skills
🏗️ Capstone Project: Intelligent Business Platform
Project Requirements
GlobalTech Corporation needs a complete AI platform for automating its business processes:
Core Capabilities:
- Document Intelligence - automatic processing of all document types
- Conversational AI - intelligent customer service and employee support
- Predictive Analytics - forecasting and risk assessment
- Content Generation - automated report creation and communication
- Knowledge Management - intelligent search across corporate knowledge
Technical Requirements:
- Multi-channel input - web, mobile, email, voice, document upload
- Real-time processing - <2 s response time for user interactions
- Scalability - support for 10,000+ concurrent users
- Integration - SAP, Salesforce, Office 365, SharePoint
- Compliance - SOX, GDPR, industry regulations
- Availability - 99.9% uptime SLA
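The non-functional requirements above can also be captured in code, so deployments can be checked against them automatically. A minimal sketch; the `PlatformSLO` class and its method names are illustrative, not part of the platform code below:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PlatformSLO:
    """Service-level objectives from the GlobalTech requirements."""
    max_response_time_s: float = 2.0    # real-time processing target
    min_concurrent_users: int = 10_000  # scalability target
    min_uptime_pct: float = 99.9        # availability SLA

    def check_response_time(self, measured_s: float) -> bool:
        return measured_s < self.max_response_time_s

    def check_uptime(self, measured_pct: float) -> bool:
        return measured_pct >= self.min_uptime_pct

slo = PlatformSLO()
print(slo.check_response_time(1.4))  # within the <2 s target
print(slo.check_uptime(99.95))       # meets the 99.9% SLA
```

Encoding the SLA as data makes it available to health checks and load tests later in the workshop.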
💻 Architecture Implementation
Complete System Architecture
from dataclasses import dataclass
from typing import Dict, List, Any, Optional
import asyncio
import json
from datetime import datetime
import uuid

@dataclass
class BusinessRequest:
    request_id: str
    user_id: str
    channel: str       # "web", "mobile", "email", "voice"
    content_type: str  # "text", "document", "image", "audio"
    content: Any
    business_context: Dict[str, Any]
    priority: int = 1

@dataclass
class ProcessingResult:
    request_id: str
    status: str  # "completed", "failed", "partial"
    results: Dict[str, Any]
    confidence_scores: Dict[str, float]
    processing_time_ms: float
    ai_services_used: List[str]
    business_actions: List[Dict[str, Any]]
class IntelligentBusinessPlatform:
    def __init__(self, config):
        self.config = config

        # Initialize all AI service layers
        self.document_processor = DocumentIntelligenceService(config["document_ai"])
        self.conversation_ai = ConversationalAIService(config["conversational_ai"])
        self.predictive_analytics = PredictiveAnalyticsService(config["predictive_ai"])
        self.content_generator = ContentGenerationService(config["content_ai"])
        self.knowledge_manager = KnowledgeManagementService(config["knowledge_ai"])

        # Business system integrations
        self.sap_integration = SAPIntegrationService(config["sap"])
        self.salesforce_integration = SalesforceIntegrationService(config["salesforce"])
        self.office365_integration = Office365IntegrationService(config["office365"])

        # Infrastructure services
        self.request_router = IntelligentRequestRouter()
        self.result_orchestrator = BusinessResultOrchestrator()
        self.monitoring_system = ComprehensiveMonitoringSystem()

        # Processing statistics
        self.platform_stats = {
            "total_requests": 0,
            "successful_automations": 0,
            "human_escalations": 0,
            "average_processing_time": 0,
            "business_value_generated": 0
        }
    async def process_business_request(self, request: BusinessRequest) -> ProcessingResult:
        """Main entry point for business request processing"""
        start_time = datetime.utcnow()
        processing_id = request.request_id

        print(f"🔄 Processing business request {processing_id}")
        print(f"   Channel: {request.channel}")
        print(f"   Content type: {request.content_type}")
        print(f"   User: {request.user_id}")

        try:
            # Step 1: Request routing and classification
            routing_decision = await self.request_router.classify_and_route(request)

            # Step 2: AI processing based on the routing decision
            ai_results = await self._execute_ai_processing_pipeline(
                request, routing_decision
            )

            # Step 3: Business logic integration
            business_actions = await self._execute_business_integration(
                ai_results, request.business_context
            )

            # Step 4: Result orchestration and formatting
            final_result = await self.result_orchestrator.finalize_result(
                ai_results, business_actions, request
            )

            processing_time = (datetime.utcnow() - start_time).total_seconds() * 1000

            # Update statistics (the orchestrator nests automation_level
            # under platform_metadata)
            self.platform_stats["total_requests"] += 1
            automation = final_result.get("platform_metadata", {}).get("automation_level", 0)
            if automation > 0.8:
                self.platform_stats["successful_automations"] += 1
            self._update_average_processing_time(processing_time)

            return ProcessingResult(
                request_id=processing_id,
                status="completed",
                results=final_result,
                confidence_scores=ai_results.get("confidence_scores", {}),
                processing_time_ms=processing_time,
                ai_services_used=ai_results.get("services_used", []),
                business_actions=business_actions
            )

        except Exception as e:
            processing_time = (datetime.utcnow() - start_time).total_seconds() * 1000
            print(f"❌ Processing failed for {processing_id}: {str(e)}")

            # Error handling and escalation
            await self._handle_processing_error(request, str(e))

            return ProcessingResult(
                request_id=processing_id,
                status="failed",
                results={"error": str(e)},
                confidence_scores={},
                processing_time_ms=processing_time,
                ai_services_used=[],
                business_actions=[{"action": "escalate_to_human", "reason": "processing_error"}]
            )
    async def _execute_ai_processing_pipeline(self, request: BusinessRequest,
                                              routing_decision: Dict) -> Dict:
        """Execute the AI processing pipeline based on the routing decision"""
        pipeline_results = {
            "services_used": [],
            "individual_results": {},
            "confidence_scores": {},
            "processing_steps": []
        }

        # Execute the AI services selected by the routing decision
        required_services = routing_decision["required_services"]
        for service_name in required_services:
            print(f"🤖 Executing {service_name}...")
            try:
                if service_name == "document_intelligence":
                    result = await self.document_processor.analyze_document(
                        request.content, request.business_context
                    )
                elif service_name == "conversational_ai":
                    result = await self.conversation_ai.process_conversation(
                        request.content, request.business_context
                    )
                elif service_name == "predictive_analytics":
                    result = await self.predictive_analytics.generate_predictions(
                        request.content, request.business_context
                    )
                elif service_name == "content_generation":
                    result = await self.content_generator.generate_content(
                        request.content, request.business_context
                    )
                elif service_name == "knowledge_search":
                    result = await self.knowledge_manager.search_knowledge(
                        request.content, request.business_context
                    )
                else:
                    continue

                pipeline_results["services_used"].append(service_name)
                pipeline_results["individual_results"][service_name] = result
                pipeline_results["confidence_scores"][service_name] = result.get("confidence", 0.5)
                pipeline_results["processing_steps"].append({
                    "service": service_name,
                    "timestamp": datetime.utcnow().isoformat(),
                    "success": True
                })
                print(f"✅ {service_name} completed (confidence: {result.get('confidence', 0.5):.2f})")

            except Exception as e:
                print(f"❌ {service_name} failed: {str(e)}")
                pipeline_results["processing_steps"].append({
                    "service": service_name,
                    "timestamp": datetime.utcnow().isoformat(),
                    "success": False,
                    "error": str(e)
                })

        # Cross-service result fusion
        if len(pipeline_results["services_used"]) > 1:
            print("🔗 Fusing cross-service results...")
            fusion_result = await self._fuse_cross_service_results(
                pipeline_results["individual_results"]
            )
            pipeline_results["fusion_analysis"] = fusion_result

        return pipeline_results
    async def _execute_business_integration(self, ai_results: Dict,
                                            business_context: Dict) -> List[Dict]:
        """Execute business system integrations based on AI results"""
        business_actions = []

        # Determine required business actions from the AI results
        for service, result in ai_results.get("individual_results", {}).items():
            if service == "document_intelligence":
                # Document processing actions
                doc_actions = await self._handle_document_processing_result(result, business_context)
                business_actions.extend(doc_actions)
            elif service == "conversational_ai":
                # Customer service actions
                cs_actions = await self._handle_customer_service_result(result, business_context)
                business_actions.extend(cs_actions)
            elif service == "predictive_analytics":
                # Predictive actions
                pred_actions = await self._handle_predictive_analytics_result(result, business_context)
                business_actions.extend(pred_actions)

        # Execute the business actions
        executed_actions = []
        for action in business_actions:
            try:
                execution_result = await self._execute_business_action(action)
                executed_actions.append({
                    **action,
                    "execution_status": "completed",
                    "execution_result": execution_result,
                    "executed_at": datetime.utcnow().isoformat()
                })
            except Exception as e:
                executed_actions.append({
                    **action,
                    "execution_status": "failed",
                    "error": str(e),
                    "executed_at": datetime.utcnow().isoformat()
                })

        return executed_actions
    async def _handle_document_processing_result(self, doc_result: Dict,
                                                 business_context: Dict) -> List[Dict]:
        """Map document processing results to business actions"""
        actions = []
        doc_type = doc_result.get("document_type", "unknown")
        confidence = doc_result.get("confidence", 0)

        if doc_type == "invoice" and confidence > 0.9:
            # High-confidence invoice - auto-process
            actions.append({
                "action_type": "sap_invoice_posting",
                "target_system": "sap",
                "data": doc_result["extracted_data"],
                "automation_level": "full",
                "requires_approval": doc_result["extracted_data"].get("amount", 0) > 10000
            })
        elif doc_type == "contract" and confidence > 0.8:
            # Contract analysis - route for legal review
            actions.append({
                "action_type": "legal_review_routing",
                "target_system": "workflow_engine",
                "data": {
                    "contract_summary": doc_result["summary"],
                    "risk_assessment": doc_result["risk_analysis"],
                    "entities": doc_result["legal_entities"]
                },
                "automation_level": "assisted",
                "requires_approval": True
            })
        elif confidence < 0.7:
            # Low confidence - human review
            actions.append({
                "action_type": "human_review_queue",
                "target_system": "workflow_engine",
                "data": doc_result,
                "automation_level": "manual",
                "priority": "high" if doc_type in ["contract", "compliance"] else "normal"
            })

        return actions
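The thresholds in `_handle_document_processing_result` implement a simple confidence ladder: above 0.9 means full automation, above 0.8 means assisted processing, below 0.7 means human review. That decision can be isolated and unit-tested on its own. A hedged sketch with the thresholds copied from the handler above; the function name `automation_tier` is ours, not part of the platform:

```python
def automation_tier(doc_type: str, confidence: float) -> str:
    """Map document type + extraction confidence to an automation level,
    mirroring the thresholds in _handle_document_processing_result."""
    if doc_type == "invoice" and confidence > 0.9:
        return "full"      # auto-post to SAP
    if doc_type == "contract" and confidence > 0.8:
        return "assisted"  # route for legal review
    if confidence < 0.7:
        return "manual"    # human review queue
    return "assisted"      # default: keep a human in the loop

print(automation_tier("invoice", 0.95))   # full
print(automation_tier("contract", 0.85))  # assisted
print(automation_tier("invoice", 0.50))   # manual
```

Keeping the ladder in one pure function makes it easy to tune the thresholds per document type without touching the integration code.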
class IntelligentRequestRouter:
    def __init__(self):
        self.routing_rules = {
            "document_upload": ["document_intelligence", "knowledge_search"],
            "customer_inquiry": ["conversational_ai", "knowledge_search"],
            "data_analysis_request": ["predictive_analytics", "content_generation"],
            "content_creation": ["content_generation", "knowledge_search"],
            "support_ticket": ["conversational_ai", "knowledge_search", "predictive_analytics"]
        }

    async def classify_and_route(self, request: BusinessRequest) -> Dict:
        """Classify the request and determine routing"""

        # Classify the request type based on content and context
        request_classification = await self._classify_request_type(request)

        # Determine the required AI services (copy the list so appending
        # conditional services below does not mutate the shared rules)
        required_services = list(self.routing_rules.get(
            request_classification["primary_type"],
            ["conversational_ai"]  # Default fallback
        ))

        # Add conditional services based on content analysis
        if request_classification.get("complexity", "low") == "high":
            if "predictive_analytics" not in required_services:
                required_services.append("predictive_analytics")
        if request_classification.get("requires_generation", False):
            if "content_generation" not in required_services:
                required_services.append("content_generation")

        routing_decision = {
            "request_classification": request_classification,
            "required_services": required_services,
            "processing_priority": self._determine_priority(request, request_classification),
            "estimated_processing_time": self._estimate_processing_time(required_services),
            "automation_potential": self._assess_automation_potential(request_classification)
        }

        print(f"🎯 Routing decision: {routing_decision['required_services']}")
        print(f"   Priority: {routing_decision['processing_priority']}")
        print(f"   Automation potential: {routing_decision['automation_potential']:.1%}")

        return routing_decision
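One subtlety in the router: `routing_rules.get(...)` returns a reference to the list stored in the table, so appending conditional services directly would silently grow the shared routing rules on every high-complexity request. A minimal, self-contained demonstration of the safe copy-then-append pattern (the `ROUTING_RULES` table here is a trimmed stand-in for the router's own):

```python
ROUTING_RULES = {
    "document_upload": ["document_intelligence", "knowledge_search"],
    "customer_inquiry": ["conversational_ai", "knowledge_search"],
}

def required_services(request_type: str, complexity: str) -> list:
    # Copy the rule list: dict.get() returns a reference to the stored
    # list, and appending to it directly would mutate the shared table.
    services = list(ROUTING_RULES.get(request_type, ["conversational_ai"]))
    if complexity == "high" and "predictive_analytics" not in services:
        services.append("predictive_analytics")
    return services

print(required_services("document_upload", "high"))
print(ROUTING_RULES["document_upload"])  # still only the two base services
```

Without the `list(...)` copy, the second call for the same request type would already see `predictive_analytics` in the base rules.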
🛠️ Workshop Implementation (120 min)
Phase 1: System Architecture (30 min)
# Complete system setup
async def setup_intelligent_business_platform():
    """Setup complete platform infrastructure"""

    # Configuration for all services
    platform_config = {
        "azure_openai": {
            "endpoint": "https://your-openai.openai.azure.com/",
            "api_key": "your-openai-key",
            "deployments": {
                "gpt-4-turbo": "gpt-4-turbo-deployment",
                "gpt-35-turbo": "gpt-35-turbo-deployment",
                "text-embedding": "text-embedding-deployment"
            }
        },
        "azure_ai_services": {
            "computer_vision": {
                "endpoint": "https://your-vision.cognitiveservices.azure.com/",
                "subscription_key": "your-vision-key"
            },
            "speech_services": {
                "subscription_key": "your-speech-key",
                "region": "eastus"
            },
            "language_services": {
                "endpoint": "https://your-language.cognitiveservices.azure.com/",
                "subscription_key": "your-language-key"
            }
        },
        "azure_search": {
            "endpoint": "https://your-search.search.windows.net",
            "admin_key": "your-search-key",
            "index_name": "corporate-knowledge"
        },
        "business_systems": {
            "sap": {
                "base_url": "https://your-sap-system.com",
                "client_id": "sap-client-id",
                "client_secret": "sap-secret"
            },
            "salesforce": {
                "instance_url": "https://your-org.salesforce.com",
                "access_token": "sf-token"
            }
        }
    }

    # Initialize the platform
    platform = IntelligentBusinessPlatform(platform_config)

    # Run system health checks
    health_status = await platform.run_comprehensive_health_check()

    if health_status["overall_health"] == "healthy":
        print("✅ Platform initialization successful")
        return platform
    else:
        print(f"⚠️ Platform health issues detected: {health_status['issues']}")
        return None

# Workshop participants implement this
# (top-level await works in a notebook; in a script use asyncio.run(...))
workshop_platform = await setup_intelligent_business_platform()
Phase 2: Core Services Implementation (45 min)
class DocumentIntelligenceService:
    def __init__(self, config):
        self.form_recognizer = FormRecognizerClient(config)
        self.custom_models = CustomModelRegistry(config)

    async def analyze_document(self, document_data, business_context):
        """Comprehensive document analysis"""

        # Multi-step document processing
        analysis_pipeline = [
            ("classification", self._classify_document_type),
            ("extraction", self._extract_structured_data),
            ("validation", self._validate_business_rules),
            ("enrichment", self._enrich_with_business_context)
        ]

        document_analysis = {
            "document_id": str(uuid.uuid4()),
            "processing_timestamp": datetime.utcnow().isoformat(),
            "pipeline_results": {},
            "overall_confidence": 0,
            "business_validation": {}
        }

        current_data = document_data
        for step_name, step_function in analysis_pipeline:
            try:
                step_result = await step_function(current_data, business_context)
                document_analysis["pipeline_results"][step_name] = step_result

                # Update the data for the next step
                if "processed_data" in step_result:
                    current_data = step_result["processed_data"]

                print(f"✅ Document analysis step '{step_name}' completed")

            except Exception as e:
                document_analysis["pipeline_results"][step_name] = {
                    "status": "failed",
                    "error": str(e)
                }
                print(f"❌ Document analysis step '{step_name}' failed: {str(e)}")

        # Calculate overall confidence
        confidences = [
            result.get("confidence", 0)
            for result in document_analysis["pipeline_results"].values()
            if isinstance(result, dict) and "confidence" in result
        ]
        if confidences:
            document_analysis["overall_confidence"] = sum(confidences) / len(confidences)

        return document_analysis
class ConversationalAIService:
    def __init__(self, config):
        self.llm_client = AzureOpenAIClient(config)
        self.conversation_memory = ConversationMemoryManager()
        self.intent_classifier = IntentClassificationService()

    async def process_conversation(self, user_input, business_context):
        """Process conversational input with business context"""
        user_id = business_context.get("user_id", "anonymous")
        session_id = business_context.get("session_id", str(uuid.uuid4()))

        # Multi-step conversation processing
        conversation_result = {
            "session_id": session_id,
            "user_input": user_input,
            "processing_steps": {},
            "response_generated": "",
            "confidence": 0,
            "requires_escalation": False
        }

        try:
            # Step 1: Intent classification
            intent_analysis = await self.intent_classifier.classify_intent(
                user_input, business_context
            )
            conversation_result["processing_steps"]["intent"] = intent_analysis

            # Step 2: Context retrieval
            conversation_history = await self.conversation_memory.get_context(session_id)
            relevant_knowledge = await self._retrieve_relevant_knowledge(
                user_input, intent_analysis
            )

            # Step 3: Response generation
            response = await self._generate_contextual_response(
                user_input=user_input,
                intent=intent_analysis,
                conversation_history=conversation_history,
                knowledge_context=relevant_knowledge,
                business_context=business_context
            )
            conversation_result["response_generated"] = response["text"]
            conversation_result["confidence"] = response["confidence"]
            conversation_result["processing_steps"]["generation"] = response

            # Step 4: Update conversation memory
            await self.conversation_memory.update_conversation(
                session_id, user_input, response["text"]
            )

            # Step 5: Determine whether escalation is needed
            if (intent_analysis.get("confidence", 0) < 0.6 or
                    response["confidence"] < 0.7 or
                    intent_analysis.get("intent") == "complex_issue"):
                conversation_result["requires_escalation"] = True
                conversation_result["escalation_reason"] = "Low confidence or complex issue"

            return conversation_result

        except Exception as e:
            conversation_result["processing_steps"]["error"] = str(e)
            conversation_result["requires_escalation"] = True
            conversation_result["escalation_reason"] = f"Processing error: {str(e)}"
            return conversation_result
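The escalation rule in step 5 is worth extracting into a pure predicate so the thresholds can be tested and tuned independently of the service. A sketch with the same cutoffs as above; the function name `needs_escalation` is our own:

```python
def needs_escalation(intent_confidence: float,
                     response_confidence: float,
                     intent: str) -> bool:
    """Escalation rule from ConversationalAIService step 5: hand off
    to a human when either confidence is low, or the intent was
    classified as a complex issue."""
    return (intent_confidence < 0.6
            or response_confidence < 0.7
            or intent == "complex_issue")

print(needs_escalation(0.9, 0.9, "order_status"))   # stays automated
print(needs_escalation(0.5, 0.9, "order_status"))   # escalates: weak intent
print(needs_escalation(0.9, 0.9, "complex_issue"))  # escalates by intent
```

Any one failing condition triggers escalation; this "OR of guards" shape keeps the bot conservative, which usually matters more than maximizing automation in customer-facing flows.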
Phase 3: Business Integration (30 min)
class BusinessResultOrchestrator:
    def __init__(self):
        self.result_templates = {
            "document_processed": self._format_document_result,
            "conversation_completed": self._format_conversation_result,
            "analysis_generated": self._format_analysis_result,
            "content_created": self._format_content_result
        }

    async def finalize_result(self, ai_results: Dict, business_actions: List[Dict],
                              original_request: BusinessRequest) -> Dict:
        """Finalize and format results for business consumption"""

        # Determine the primary result type
        primary_service = self._determine_primary_service(ai_results["services_used"])
        result_type = self._map_service_to_result_type(primary_service)

        # Format using the appropriate template
        if result_type in self.result_templates:
            formatted_result = await self.result_templates[result_type](
                ai_results, business_actions, original_request
            )
        else:
            # Generic formatting
            formatted_result = await self._format_generic_result(
                ai_results, business_actions, original_request
            )

        # Add platform metadata
        formatted_result.update({
            "platform_metadata": {
                "request_id": original_request.request_id,
                "processing_timestamp": datetime.utcnow().isoformat(),
                "user_id": original_request.user_id,
                "channel": original_request.channel,
                "ai_services_used": ai_results["services_used"],
                "automation_level": self._calculate_automation_level(business_actions),
                "business_value_score": self._calculate_business_value(ai_results, business_actions)
            }
        })

        return formatted_result

    def _calculate_automation_level(self, business_actions: List[Dict]) -> float:
        """Calculate the level of automation achieved"""
        if not business_actions:
            return 0.0

        automated_actions = len([
            action for action in business_actions
            if action.get("automation_level") == "full"
        ])
        assisted_actions = len([
            action for action in business_actions
            if action.get("automation_level") == "assisted"
        ]) * 0.5  # Partial automation credit

        total_automation_score = automated_actions + assisted_actions
        automation_level = total_automation_score / len(business_actions)
        return min(automation_level, 1.0)
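The scoring in `_calculate_automation_level` weights fully automated actions at 1.0 and assisted ones at 0.5, then normalizes by the total action count. A standalone version with a worked example, reimplemented here so it runs without the orchestrator class:

```python
def automation_level(actions: list) -> float:
    """Fraction of business actions handled automatically:
    'full' counts 1.0, 'assisted' counts 0.5, anything else 0."""
    if not actions:
        return 0.0
    full = sum(1 for a in actions if a.get("automation_level") == "full")
    assisted = 0.5 * sum(1 for a in actions if a.get("automation_level") == "assisted")
    return min((full + assisted) / len(actions), 1.0)

sample_actions = [
    {"action_type": "sap_invoice_posting", "automation_level": "full"},
    {"action_type": "legal_review_routing", "automation_level": "assisted"},
    {"action_type": "human_review_queue", "automation_level": "manual"},
]
print(automation_level(sample_actions))  # (1 + 0.5) / 3 = 0.5
```

This is the value the platform compares against the 0.8 threshold when counting a request as a "successful automation" in its statistics.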
Phase 4: Testing and Deployment (15 min)
class PlatformTester:
    def __init__(self, platform):
        self.platform = platform

    async def run_end_to_end_tests(self):
        """Comprehensive platform testing"""
        test_scenarios = [
            {
                "name": "Document Upload Workflow",
                "request": BusinessRequest(
                    request_id=str(uuid.uuid4()),
                    user_id="test_user_1",
                    channel="web",
                    content_type="document",
                    content={"document_url": "test_invoice.pdf"},
                    business_context={"department": "finance"}
                ),
                "expected_services": ["document_intelligence"],
                "expected_actions": ["sap_invoice_posting"]
            },
            {
                "name": "Customer Support Conversation",
                "request": BusinessRequest(
                    request_id=str(uuid.uuid4()),
                    user_id="customer_123",
                    channel="chat",
                    content_type="text",
                    content="I have an issue with my recent order",
                    business_context={"customer_tier": "premium"}
                ),
                "expected_services": ["conversational_ai", "knowledge_search"],
                "expected_actions": ["salesforce_case_creation"]
            }
        ]

        test_results = []
        for scenario in test_scenarios:
            print(f"\n🧪 Testing scenario: {scenario['name']}")
            try:
                # Execute the test
                result = await self.platform.process_business_request(scenario["request"])

                # Validate the results
                validation = self._validate_test_result(result, scenario)
                test_results.append({
                    "scenario": scenario["name"],
                    "status": "passed" if validation["valid"] else "failed",
                    "validation": validation,
                    "processing_time": result.processing_time_ms,
                    "automation_achieved": len([
                        action for action in result.business_actions
                        if action.get("automation_level") == "full"
                    ])
                })

                status_emoji = "✅" if validation["valid"] else "❌"
                print(f"{status_emoji} {scenario['name']}: {validation['summary']}")

            except Exception as e:
                test_results.append({
                    "scenario": scenario["name"],
                    "status": "error",
                    "error": str(e)
                })
                print(f"❌ {scenario['name']}: Error - {str(e)}")

        # Summary (guard against an empty scenario list)
        passed_tests = len([t for t in test_results if t["status"] == "passed"])
        total_tests = len(test_results)
        print(f"\n📊 Test Results: {passed_tests}/{total_tests} scenarios passed")

        return {
            "test_results": test_results,
            "success_rate": passed_tests / total_tests if total_tests else 0.0,
            "platform_ready": total_tests > 0 and passed_tests == total_tests
        }
✅ Workshop tasks
Main project: Complete Business AI Platform (90 min)
Deliverables:
- Full architecture - all layers and integrations (30 min)
- Core AI services - document, conversation, analytics (45 min)
- Business integration - SAP/Salesforce connectors (15 min)
Final Assessment
Technical Excellence (50 points)
- Architecture completeness (15 pts)
- AI services integration (20 pts)
- Business system integration (15 pts)
Innovation and Creativity (30 points)
- Unique features implementation (15 pts)
- User experience design (15 pts)
Production Readiness (20 points)
- Error handling and monitoring (10 pts)
- Scalability considerations (10 pts)
🎓 Capstone Project Presentation
Presentation Requirements (15-minute presentations)
Demo Structure:
- Business problem overview (2 min)
- Solution architecture (3 min)
- Live demo of key features (7 min)
- Technical challenges and solutions (2 min)
- Next steps and roadmap (1 min)
Evaluation Criteria:
- Technical sophistication - complexity and quality of the implementation
- Business relevance - practical value for enterprise use
- Innovation - creative use of AI technologies
- Presentation quality - clarity and professionalism
🏆 Workshop outcome
After completing the workshop, participants will have:
- A complete enterprise AI solution - a portfolio-ready project
- End-to-end implementation experience - all layers and integrations
- Production deployment skills - knowledge applicable in the real world
- Presentation and demo capabilities - professional showcasing skills
Congratulations! You are now ready to work as an Azure AI Engineer! 🎉